| column | dtype | range |
|:--|:--|:--|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 to 18.3M |
| metadata | stringlengths | 2 to 1.07B |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | sequencelengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
reinforcement-learning
null
# **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
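As an illustrative sketch only (the exact checkpoint format is defined in Unit 4 of the course), a REINFORCE agent for CartPole is a small feed-forward policy network whose action probabilities are sampled at each step. Assuming `gymnasium` and `torch`, and a hypothetical Unit 4-style `Policy` architecture, an evaluation loop looks like this:

```python
# Illustrative REINFORCE evaluation loop (assumes gymnasium and torch;
# the Policy architecture mirrors the Unit 4 reference implementation and
# may differ from the exact checkpoint stored in this repo).
import gymnasium as gym
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    def __init__(self, s_size=4, a_size=2, h_size=16):
        super().__init__()
        self.fc1 = nn.Linear(s_size, h_size)
        self.fc2 = nn.Linear(h_size, a_size)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return F.softmax(self.fc2(x), dim=1)

    def act(self, state):
        # Sample an action from the categorical distribution over actions.
        probs = self.forward(torch.from_numpy(state).float().unsqueeze(0))
        return torch.distributions.Categorical(probs).sample().item()

env = gym.make("CartPole-v1")
policy = Policy()  # load the trained weights here before evaluating
state, _ = env.reset()
total_reward, done = 0.0, False
while not done:
    state, reward, terminated, truncated, _ = env.step(policy.act(state))
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward}")
```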
{"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-CartPoleV1", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "133.80 +/- 10.01", "name": "mean_reward", "verified": false}]}]}]}
Joalbom14/Reinforce-CartPoleV1
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-05-02T02:08:39+00:00
null
gpt-neox
# joeshmoethefunnyone/pythia-70m-Q8_0-GGUF This model was converted to GGUF format from [`EleutherAI/pythia-70m`](https://huggingface.co/EleutherAI/pythia-70m) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/EleutherAI/pythia-70m) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew: ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo joeshmoethefunnyone/pythia-70m-Q8_0-GGUF --model pythia-70m.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo joeshmoethefunnyone/pythia-70m-Q8_0-GGUF --model pythia-70m.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m pythia-70m.Q8_0.gguf -n 128 ```
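For programmatic use, here is a minimal sketch with the `llama-cpp-python` bindings (not part of the original conversion workflow; it assumes the GGUF file has already been downloaded locally):

```python
# Minimal llama-cpp-python sketch; install with `pip install llama-cpp-python`.
from llama_cpp import Llama

# Load the locally downloaded quantized checkpoint.
llm = Llama(model_path="pythia-70m.Q8_0.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```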
{"language": ["en"], "license": "apache-2.0", "library_name": "gpt-neox", "tags": ["pytorch", "causal-lm", "pythia", "llama-cpp", "gguf-my-repo"], "datasets": ["EleutherAI/pile"]}
joeshmoethefunnyone/pythia-70m-Q8_0-GGUF
null
[ "gpt-neox", "gguf", "pytorch", "causal-lm", "pythia", "llama-cpp", "gguf-my-repo", "en", "dataset:EleutherAI/pile", "license:apache-2.0", "region:us" ]
null
2024-05-02T02:09:06+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
disroom/test_gemma-1.1-7b-it-0.01M
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:11:26+00:00
null
transformers
# Uploaded model - **Developed by:** jurieyel - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
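Since this repo ships a q4_K_M GGUF, a hedged download-and-run sketch with `huggingface_hub` and the `llama-cpp-python` bindings might look like the following; the GGUF filename is an assumption, so check the repo's file list for the actual name:

```python
# Sketch: fetch the GGUF from the Hub and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="jurieyel/77hr-tuned-Llama3-8b-4bit-GGUF-q4_K_M",
    filename="unsloth.Q4_K_M.gguf",  # assumed filename; verify in the repo
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Hello", max_tokens=32)["choices"][0]["text"])
```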
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
jurieyel/77hr-tuned-Llama3-8b-4bit-GGUF-q4_K_M
null
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:14:12+00:00
null
null
{}
wuzhongyanqiu/repo_name
null
[ "region:us" ]
null
2024-05-02T02:14:31+00:00
null
null
{}
skumar310/NER-TASK
null
[ "region:us" ]
null
2024-05-02T02:15:33+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: JackFram/llama-68m model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: false strict: false datasets: - path: /data/data/final_set_cleaned/train/ type: sharegpt conversation: chatml - path: /data/data/map_coig_cqia.jsonl type: sharegpt conversation: chatml - path: /data/data/ruozhiba.jsonl type: sharegpt conversation: chatml dataset_prepared_path: last_run_prepared val_set_size: 0 output_dir: ./out sequence_len: 4096 sample_packing: true pad_to_sequence_len: true wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 8 micro_batch_size: 4 num_epochs: 2 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 2e-5 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 100 evals_per_epoch: 0 eval_table_size: saves_per_epoch: 4 debug: deepspeed: deepspeed/zero2.json weight_decay: 0.0 fsdp: fsdp_config: default_system_message: "You are a helpful assistant." special_tokens: eos_token: "<|im_end|>" pad_token: "<|end_of_text|>" tokens: - "<|im_start|>" - "<|im_end|>" ``` </details><br> # data/llama-68m-20240502-0037 This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on the datasets listed in the axolotl config above. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 6 - gradient_accumulation_steps: 8 - total_train_batch_size: 192 - total_eval_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.0.1+cu118 - Datasets 2.15.0 - Tokenizers 0.19.1
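As an inference sketch grounded in the config above (the ChatML template, system message, and special tokens come from that config; generation settings are illustrative):

```python
# ChatML prompt built from the axolotl config above; generation settings
# are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "lu-vae/llama-68m-fft"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is the capital of France?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```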
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "JackFram/llama-68m", "model-index": [{"name": "data/llama-68m-20240502-0037", "results": []}]}
lu-vae/llama-68m-fft
null
[ "transformers", "pytorch", "llama", "text-generation", "generated_from_trainer", "base_model:JackFram/llama-68m", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T02:16:17+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # toxic_generator_model This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.3880 - Accuracy: 0.4552 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40.0 ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.19.1
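As a minimal usage sketch (not part of the auto-generated card), the model can be queried with the `transformers` text-generation pipeline; the sampling settings below are illustrative, and outputs from this model may be offensive by design:

```python
# Usage sketch with the transformers pipeline API.
from transformers import pipeline

generator = pipeline("text-generation", model="theastronuts/toxic_sentence_generator")
print(generator("The weather today", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```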
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "gpt2", "model-index": [{"name": "toxic_generator_model", "results": []}]}
theastronuts/toxic_sentence_generator
null
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T02:16:55+00:00
null
transformers
{}
da-xiong/tku2024
null
[ "transformers", "endpoints_compatible", "has_space", "region:us" ]
null
2024-05-02T02:18:05+00:00
null
null
{"license": "mit"}
wuzhongyanqiu/firstmodel
null
[ "license:mit", "region:us" ]
null
2024-05-02T02:18:51+00:00
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
astro21/pix2struct-base-coco-f
null
[ "transformers", "safetensors", "pix2struct", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:19:03+00:00
null
null
# Laura - A fast, local neural text-to-speech system that sounds great. ![Laura](Laura.jpeg) Welcome to the repository of the Laura text-to-speech model, based on the ONNX Runtime architecture. This model was trained using Piper, a state-of-the-art machine-learning platform. ## About Laura Laura is a small text-to-speech model designed to deliver accurate, natural text-to-speech conversion. It is built on the ONNX Runtime architecture for optimal, efficient performance. The model was trained with Piper, ensuring a high level of quality in text-to-speech conversion. ## Features - **ONNX Runtime architecture**: Provides efficient, high-quality performance. - **Trained with Piper**: Guarantees accurate, natural text-to-speech conversion. - **Easy to use**: Simple to integrate into your text-to-speech projects. ## How to try Laura You can try the Laura model at [tts.hircoir.eu.org](http://tts.hircoir.eu.org). There you can experience the quality of Laura's text-to-speech conversion for yourself. # Usage license for the voice model This license sets out the conditions under which this voice model may be used. By using this voice model, you agree to comply with these conditions. ## Permissions - **General use**: The voice model may be used for any purpose. - **Credit**: Crediting the model's creator is optional but appreciated. ## Restrictions - **No use in paid services**: Using the voice model in paid text-to-speech services, or in any other service that charges users for the use of this voice model, is strictly prohibited. - **No commercial use**: Commercial use of the voice model is not permitted if that use involves payment by end users. ## Intellectual property - This license grants no intellectual-property rights over the voice model. All copyright and intellectual-property rights in the voice model are reserved by the creator. ## Non-compliance Failure to comply with these conditions may result in the revocation of your right to use the voice model. ## Contact If you have any questions or concerns about this license or its use, you can contact the creator. --- By using this voice model, you agree to comply with all of the above conditions. Thank you for respecting these guidelines.
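As a local usage sketch (assuming the `piper` CLI from the piper-tts project; the file names below are placeholders for the .onnx model shipped in this repo):

```python
# Sketch: synthesize speech by piping text into the Piper CLI
# (`pip install piper-tts`). "laura.onnx" is a placeholder; use the
# actual model file from this repo.
import subprocess

text = "Hola, soy Laura."
subprocess.run(
    ["piper", "--model", "laura.onnx", "--output_file", "laura.wav"],
    input=text.encode("utf-8"),
    check=True,
)
```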
{"language": ["es"], "license": "cc-by-nc-4.0", "tags": ["piper", "onnx", "HirCoir", "es_MX"]}
HirCoir/Piper-TTS-Laura
null
[ "onnx", "piper", "HirCoir", "es_MX", "es", "license:cc-by-nc-4.0", "region:us" ]
null
2024-05-02T02:20:38+00:00
null
null
{}
Phanh2532/GAMA-Code-generator-v2.0
null
[ "region:us" ]
null
2024-05-02T02:21:29+00:00
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Luluuu/0502_T5_baseline_checkpoint_1000
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T02:24:10+00:00
null
null
{}
apez1267/Lily-Cybersecurity-7B-v0.2-GGUF
null
[ "gguf", "region:us" ]
null
2024-05-02T02:24:29+00:00
reinforcement-learning
transformers
# TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="pkbiswas/Phi-3-Detoxified-PPO-LoRa") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("pkbiswas/Phi-3-Detoxified-PPO-LoRa") model = AutoModelForCausalLMWithValueHead.from_pretrained("pkbiswas/Phi-3-Detoxified-PPO-LoRa") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
{"license": "apache-2.0", "tags": ["trl", "ppo", "transformers", "reinforcement-learning"]}
pkbiswas/Phi-3-Detoxified-PPO-LoRa
null
[ "transformers", "pytorch", "safetensors", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:25:31+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "beomi/KoAlpaca-Polyglot-5.8B"}
lee0901/synerEngine_v1
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:beomi/KoAlpaca-Polyglot-5.8B", "region:us" ]
null
2024-05-02T02:27:10+00:00
text-to-image
diffusers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "diffusers"}
rubbrband/aingdiffusionXL_v125
null
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
null
2024-05-02T02:27:16+00:00
null
null
{}
huynq3Cyradar/bert-large-finetuned-phishing-webpage-version
null
[ "region:us" ]
null
2024-05-02T02:29:30+00:00
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
tsuneakikato/bert-base-japanese-v3-jnli
null
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:29:44+00:00
null
transformers
# Azazelle/L3-RP_io-Q6_K-GGUF This model was converted to GGUF format from [`Azazelle/L3-RP_io`](https://huggingface.co/Azazelle/L3-RP_io) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Azazelle/L3-RP_io) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew: ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo Azazelle/L3-RP_io-Q6_K-GGUF --model l3-rp_io.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo Azazelle/L3-RP_io-Q6_K-GGUF --model l3-rp_io.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m l3-rp_io.Q6_K.gguf -n 128 ```
{"license": "llama3", "library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": ["ResplendentAI/Aura_Uncensored_l3_8B", "meta-llama/Meta-Llama-3-8B-Instruct", "ResplendentAI/Kei_Llama3_8B", "Undi95/Llama-3-Unholy-8B", "vicgalle/Roleplay-Llama-3-8B"]}
Azazelle/L3-RP_io-Q6_K-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:ResplendentAI/Aura_Uncensored_l3_8B", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:ResplendentAI/Kei_Llama3_8B", "base_model:Undi95/Llama-3-Unholy-8B", "base_model:vicgalle/Roleplay-Llama-3-8B", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:29:54+00:00
null
transformers
# Uploaded model - **Developed by:** hamiejuice - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
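A hedged loading sketch follows (Unsloth is the library the card names, but a plain `transformers` `AutoModelForCausalLM.from_pretrained` call should also work; the settings are illustrative and a CUDA GPU is assumed):

```python
# Loading sketch with Unsloth's FastLanguageModel; settings are illustrative.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="hamiejuice/annotated-1poc",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast generation mode
inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```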
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
hamiejuice/annotated-1poc
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:30:16+00:00
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.5364 - Accuracy: 0.81 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9223 | 1.0 | 113 | 1.8358 | 0.49 | | 1.1658 | 2.0 | 226 | 1.2417 | 0.62 | | 0.9584 | 3.0 | 339 | 1.0307 | 0.71 | | 0.7216 | 4.0 | 452 | 0.8311 | 0.72 | | 0.4906 | 5.0 | 565 | 0.6832 | 0.83 | | 0.4631 | 6.0 | 678 | 0.5772 | 0.84 | | 0.2003 | 7.0 | 791 | 0.5641 | 0.84 | | 0.099 | 8.0 | 904 | 0.5368 | 0.83 | | 0.1449 | 9.0 | 1017 | 0.5270 | 0.82 | | 0.0776 | 10.0 | 1130 | 0.5364 | 0.81 | ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
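A minimal inference sketch (not part of the auto-generated card; the audio path is a placeholder):

```python
# Classify a music clip into one of the ten GTZAN genres.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="Gunnika/distilhubert-finetuned-gtzan",
)
print(classifier("song.wav"))  # e.g. [{"label": "blues", "score": 0.9}, ...]
```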
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["marsyas/gtzan"], "metrics": ["accuracy"], "base_model": "ntu-spml/distilhubert", "model-index": [{"name": "distilhubert-finetuned-gtzan", "results": [{"task": {"type": "audio-classification", "name": "Audio Classification"}, "dataset": {"name": "GTZAN", "type": "marsyas/gtzan", "config": "all", "split": "train", "args": "all"}, "metrics": [{"type": "accuracy", "value": 0.81, "name": "Accuracy"}]}]}]}
Gunnika/distilhubert-finetuned-gtzan
null
[ "transformers", "tensorboard", "safetensors", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:ntu-spml/distilhubert", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:31:49+00:00
automatic-speech-recognition
transformers
{}
darianovoselova/wav2vec2-timit
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:32:21+00:00
object-detection
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
NekoJojo/DETRFaceDetection
null
[ "transformers", "safetensors", "detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:32:32+00:00
null
null
{}
sm09-dev/adb
null
[ "region:us" ]
null
2024-05-02T02:33:06+00:00
text-classification
transformers
{"language": ["vi"], "license": "apache-2.0", "datasets": ["unicamp-dl/mmarco"], "pipeline_tag": "text-classification", "widget": [{"text": "UIT l\u00e0 g\u00ec. Tr\u01b0\u1eddng \u0110\u1ea1i h\u1ecdc C\u00f4ng ngh\u1ec7 Th\u00f4ng tin (ti\u1ebfng Anh: University of Information Technology \u2013 UIT) l\u00e0 m\u1ed9t trung t\u00e2m h\u00e0ng \u0111\u1ea7u v\u1ec1 nghi\u00ean c\u1ee9u khoa h\u1ecdc.", "output": [{"label": "LABEL_0", "score": 0.988}]}]}
itdainb/vietnamese-cross-encoder
null
[ "transformers", "onnx", "safetensors", "roberta", "text-classification", "vi", "dataset:unicamp-dl/mmarco", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:34:03+00:00
text-generation
transformers
# nbeerbower/flammen23-mistral-7B AWQ - Model creator: [nbeerbower](https://huggingface.co/nbeerbower) - Original model: [flammen23-mistral-7B](https://huggingface.co/nbeerbower/flammen23-mistral-7B) ![image/png](https://huggingface.co/nbeerbower/flammen13X-mistral-7B/resolve/main/flammen13x.png) ## How to use ### Install the necessary packages ```bash pip install --upgrade autoawq autoawq-kernels ``` ### Example Python code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer, TextStreamer model_path = "solidrust/flammen23-mistral-7B-AWQ" system_message = "You are flammen23-mistral-7B, incarnated as a powerful AI. You were created by nbeerbower." # Load model model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) # Convert prompt to tokens prompt_template = """\ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant""" prompt = "You're standing on the surface of the Earth. "\ "You walk one mile south, one mile west and one mile north. "\ "You end up exactly where you started. Where are you?" tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt), return_tensors='pt').input_ids.cuda() # Generate output generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512) ``` ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
{"license": "apache-2.0", "library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "datasets": ["nbeerbower/bible-dpo"], "base_model": ["flammenai/flammen22X-mistral-7B"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"}
solidrust/flammen23-mistral-7B-AWQ
null
[ "transformers", "safetensors", "mistral", "text-generation", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "dataset:nbeerbower/bible-dpo", "base_model:flammenai/flammen22X-mistral-7B", "license:apache-2.0", "text-generation-inference", "region:us" ]
null
2024-05-02T02:34:34+00:00
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/TeeZee/GALAXY-XB-v.03 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q2_K.gguf) | Q2_K | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.IQ3_XS.gguf) | IQ3_XS | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q3_K_S.gguf) | Q3_K_S | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.IQ3_S.gguf) | IQ3_S | 7.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.IQ3_M.gguf) | IQ3_M | 7.3 | | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q3_K_M.gguf) | Q3_K_M | 7.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q3_K_L.gguf) | Q3_K_L | 8.5 | | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.IQ4_XS.gguf) | IQ4_XS | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q4_K_S.gguf) | Q4_K_S | 9.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q4_K_M.gguf) | Q4_K_M | 9.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q5_K_S.gguf) | Q5_K_S | 11.1 | | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q5_K_M.gguf) | Q5_K_M | 11.4 | | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q6_K.gguf) | Q6_K | 13.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q8_0.gguf) | Q8_0 | 17.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
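As a rough Python alternative to the CLI workflow, here is a minimal sketch (not part of this repo) that assumes `huggingface-hub` and `llama-cpp-python` are installed and picks the Q4_K_S file from the table above:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant from this repo; the filename matches the Q4_K_S row above.
model_path = hf_hub_download(
    repo_id="mradermacher/GALAXY-XB-v.03-GGUF",
    filename="GALAXY-XB-v.03.Q4_K_S.gguf",
)

# Load the GGUF file with llama.cpp's Python bindings and run a short completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Q: What is weight quantization? A:", max_tokens=64)
print(out["choices"][0]["text"])
```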
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "TeeZee/GALAXY-XB-v.03", "no_imatrix": "nan-abort", "quantized_by": "mradermacher"}
mradermacher/GALAXY-XB-v.03-GGUF
null
[ "transformers", "gguf", "en", "base_model:TeeZee/GALAXY-XB-v.03", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:36:00+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
lsr42/epic
null
[ "transformers", "safetensors", "EPIC", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:36:36+00:00
null
null
{"license": "llama3"}
tosaddler/Llama3-OpenBioLLM-70B-GGUF
null
[ "license:llama3", "region:us" ]
null
2024-05-02T02:37:12+00:00
unconditional-image-generation
diffusers
# Model Card for Unit 1 of the [Diffusion Models Class ๐Ÿงจ](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute ๐Ÿฆ‹. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('ericpotter/sd-class-butterflies-32') image = pipeline().images[0] image ```
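The pipeline returns standard PIL images, so sampling a batch and saving to disk works directly; a small follow-up sketch (the filenames and batch size here are just illustrative):

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('ericpotter/sd-class-butterflies-32')

# batch_size is a standard DDPMPipeline argument for sampling several images at once.
images = pipeline(batch_size=4).images
for i, im in enumerate(images):
    im.save(f"butterfly_{i}.png")  # PIL images can be saved directly
```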
{"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]}
ericpotter/sd-class-butterflies-32
null
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
null
2024-05-02T02:37:14+00:00
null
null
{"license": "openrail"}
rieviee/Kendji
null
[ "license:openrail", "region:us" ]
null
2024-05-02T02:37:29+00:00
null
null
{"license": "mit"}
cieruinfile/ev
null
[ "license:mit", "region:us" ]
null
2024-05-02T02:37:41+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
cody628/t5-large_PREFIX_TUNING_SEQ2SEQ
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:37:50+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
vagarwal97/Mistral-7B-Instruct-FT-YTcomments
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:38:05+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # shawgpt-ft This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8743 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.5927 | 0.9231 | 3 | 3.9598 | | 4.0453 | 1.8462 | 6 | 3.4396 | | 3.4729 | 2.7692 | 9 | 2.9900 | | 2.2603 | 4.0 | 13 | 2.5591 | | 2.6738 | 4.9231 | 16 | 2.3232 | | 2.3752 | 5.8462 | 19 | 2.1372 | | 2.1629 | 6.7692 | 22 | 2.0113 | | 1.5507 | 8.0 | 26 | 1.9372 | | 1.9905 | 8.9231 | 29 | 1.8902 | | 1.3794 | 9.2308 | 30 | 1.8743 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.1.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
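This card ships no inference code, so here is a minimal loading sketch under stated assumptions: `peft`, `transformers`, and a GPTQ runtime (e.g. `auto-gptq`/`optimum`) are installed, the adapter lives at this card's model id (`vagarwal97/shawgpt-ft`), and the prompt follows the `[INST]` convention of the Mistral-Instruct base model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"
adapter_id = "vagarwal97/shawgpt-ft"  # assumed: the repo this card describes

# Load the quantized base model, then attach the fine-tuned LoRA adapter on top.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "[INST] Write a one-line greeting for a YouTube comment. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```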
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "model-index": [{"name": "shawgpt-ft", "results": []}]}
vagarwal97/shawgpt-ft
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-05-02T02:38:07+00:00
text-generation
transformers
# Uploaded model - **Developed by:** Lawlux - **License:** apache-2.0 - **Finetuned from model :** meta-llama/Meta-Llama-3-8B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
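No usage snippet is provided, so the following is a hedged inference sketch: it assumes the repo ships full merged weights rather than a bare LoRA adapter, and that its tokenizer carries the standard Llama 3 chat template; the question is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Lawlux/Lixo-Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a Llama-3-style chat prompt from the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain LoRA fine-tuning in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```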
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct"}
Lawlux/Lixo-Meta-Llama-3-8B-Instruct
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:38:39+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
lsr42/sparta
null
[ "transformers", "safetensors", "SPARTA", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:40:00+00:00
null
null
{"license": "openrail"}
modeldodel/MINYEON-MINJEONG
null
[ "license:openrail", "region:us" ]
null
2024-05-02T02:40:14+00:00
null
null
{"license": "mit"}
abdelazizfathy2021/doma
null
[ "license:mit", "region:us" ]
null
2024-05-02T02:41:01+00:00
null
null
{"license": "openrail"}
modeldodel/JIWON
null
[ "license:openrail", "region:us" ]
null
2024-05-02T02:41:12+00:00
null
null
{}
Isjamzz/Real
null
[ "region:us" ]
null
2024-05-02T02:43:46+00:00
text-generation
null
## Exllama v2 Quantizations of Hermes-2-Pro-Llama-3-8B

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.20">turboderp's ExLlamaV2 v0.0.20</a> for quantization.

<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b>

Each branch holds an individual bits-per-weight quantization, while the main branch contains only the measurement.json used for further conversions.

Original model: https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B

## Prompt format

```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
```

## Available sizes

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-exl2/tree/5_0) | 5.0 | 6.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-exl2/tree/4_25) | 4.25 | 6.0 | 7.0 GB | 7.4 GB | 8.4 GB | 10.5 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 6.8 GB | 7.8 GB | 9.9 GB | Lower quality, only use if you have to. |

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-exl2 Hermes-2-Pro-Llama-3-8B-exl2-6_5
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:

Linux:

```shell
huggingface-cli download bartowski/Hermes-2-Pro-Llama-3-8B-exl2 --revision 6_5 --local-dir Hermes-2-Pro-Llama-3-8B-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which apparently doesn't like _ in folders sometimes?):

```shell
huggingface-cli download bartowski/Hermes-2-Pro-Llama-3-8B-exl2 --revision 6_5 --local-dir Hermes-2-Pro-Llama-3-8B-exl2-6.5 --local-dir-use-symlinks False
```

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
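The same branch download can also be scripted from Python with `huggingface_hub` — a sketch equivalent to the CLI commands above, using the 6.5 bpw branch as in the example:

```python
from huggingface_hub import snapshot_download

# Grab only the 6_5 branch, mirroring the huggingface-cli example above.
snapshot_download(
    repo_id="bartowski/Hermes-2-Pro-Llama-3-8B-exl2",
    revision="6_5",
    local_dir="Hermes-2-Pro-Llama-3-8B-exl2-6_5",
)
```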
{"language": ["en"], "license": "apache-2.0", "tags": ["Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl"], "datasets": ["teknium/OpenHermes-2.5"], "base_model": "NousResearch/Meta-Llama-3-8B", "widget": [{"example_title": "Hermes 2 Pro", "messages": [{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."}, {"role": "user", "content": "Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world."}]}], "quantized_by": "bartowski", "pipeline_tag": "text-generation", "model-index": [{"name": "Hermes-2-Pro-Llama-3-8B", "results": []}]}
bartowski/Hermes-2-Pro-Llama-3-8B-exl2
null
[ "Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "text-generation", "en", "dataset:teknium/OpenHermes-2.5", "base_model:NousResearch/Meta-Llama-3-8B", "license:apache-2.0", "region:us" ]
null
2024-05-02T02:44:45+00:00
null
null
{}
Xrunner/hive-o
null
[ "region:us" ]
null
2024-05-02T02:46:01+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
lsr42/splade
null
[ "transformers", "safetensors", "SPLADE", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:46:01+00:00
null
null
{}
eminAydin/gpt2Kaggle
null
[ "region:us" ]
null
2024-05-02T02:46:03+00:00
token-classification
transformers
{"license": "mit"}
Minwoo-Study/AIHUB_IndonesianCorpus_NER
null
[ "transformers", "pytorch", "bert", "token-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:46:17+00:00
null
transformers
# Uploaded model - **Developed by:** Samoed - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
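Judging by the repo name, this appears to be a LoRA adapter on the 4-bit Phi-3 base named in the metadata; a loading sketch under that assumption (it requires `peft`, `transformers`, and `bitsandbytes` for the 4-bit base):

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "Samoed/PRGen-Phi-3-mini-4k-instruct-4bit-LoRA"  # this repo

# AutoPeftModelForCausalLM reads the adapter config and pulls in the base
# model (unsloth/Phi-3-mini-4k-instruct-bnb-4bit) automatically.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

inputs = tokenizer("Summarize PEFT in one sentence.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```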
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/Phi-3-mini-4k-instruct-bnb-4bit"}
Samoed/PRGen-Phi-3-mini-4k-instruct-4bit-LoRA
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:46:30+00:00
null
null
{}
ericpotter/sd-class-butterflies-64
null
[ "region:us" ]
null
2024-05-02T02:46:42+00:00
text-generation
transformers
# Llama3-TAIDE-LX-8B-Chat-Alpha1 - AWQ - Model creator: [taide](https://huggingface.co/taide) - Original model: [taide/Llama3-TAIDE-LX-8B-Chat-Alpha1](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1) <!-- description start --> ## Description This repo contains AWQ model files for [taide/Llama3-TAIDE-LX-8B-Chat-Alpha1](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code ## Original model card --- * [English Version](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1/blob/main/README_en.md) # ๆจกๅž‹็ฐกไป‹ * [TAIDE่จˆ็•ซ](https://taide.tw/index)่‡ดๅŠ›ๆ–ผ้–‹็™ผ็ฌฆๅˆๅฐ็ฃ่ชž่จ€ๅ’Œๆ–‡ๅŒ–็‰นๆ€ง็š„็”Ÿๆˆๅผไบบๅทฅๆ™บๆ…งๅฐ่ฉฑๅผ•ๆ“Žๆจกๅž‹๏ผŒๅŒๆ™‚ๅปบๆง‹ๅฏไฟกไปป็š„ไบบๅทฅๆ™บๆ…ง็’ฐๅขƒใ€‚็ตๅˆ็”ขๅญธ็ ”่ƒฝ้‡๏ผŒๆŽจๅ‹•ๅฏไฟกไปป็”Ÿๆˆๅผไบบๅทฅๆ™บๆ…ง็š„็™ผๅฑ•๏ผŒๆๅ‡ๅฐ็ฃๅœจๅœ‹้š›็ซถ็ˆญไธญ็š„ๅœฐไฝ๏ผŒไฟƒ้€ฒ็”ขๆฅญ็™ผๅฑ•๏ผŒ้ฟๅ…ๅฐๅค–ๅœ‹ๆŠ€่ก“็š„ไพ่ณดใ€‚ * Llama3 TAIDE ็ณปๅˆ—ๆจกๅž‹ไปฅ Meta ๅ…ฌๅธ้‡‹ๅ‡บ็š„ [LLaMA3-8b](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/) ็‚บๅŸบ็คŽ๏ผŒๅฐŽๅ…ฅๅฐ็ฃไธๅŒ้ ˜ๅŸŸๅฏ็”จ็š„ๆ–‡ๆœฌ่ˆ‡่จ“็ทด็ด ๆ๏ผŒๆ้ซ˜ๆจกๅž‹ๅœจๆญฃ้ซ”ไธญๆ–‡ๅ›žๆ‡‰็š„่ƒฝๅŠ›่ˆ‡็‰นๅฎšไปปๅ‹™็š„่กจ็พใ€‚ๅ…ฌ้–‹้‡‹ๅ‡บ็š„ๆจกๅž‹ๅฆ‚ไธ‹: * [Llama3-TAIDE-LX-8B-Chat-Alpha1](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1): ไปฅ LLaMA3-8b ็‚บๅŸบ็คŽ๏ผŒไฝฟ็”จ็น้ซ”ไธญๆ–‡่ณ‡ๆ–™้ ่จ“็ทด (continuous pretraining)๏ผŒไธฆ้€้ŽๆŒ‡ไปคๅพฎ่ชฟ(instruction tuning)ๅผทๅŒ–่พฆๅ…ฌๅฎคๅธธ็”จไปปๅ‹™ๅ’Œๅคš่ผชๅ•็ญ”ๅฐ่ฉฑ่ƒฝๅŠ›๏ผŒ้ฉๅˆ่Šๅคฉๅฐ่ฉฑๆˆ–ไปปๅ‹™ๅ”ๅŠฉ็š„ไฝฟ็”จๆƒ…ๅขƒใ€‚Llama3-TAIDE-LX-8B-Chat-Alpha1 ๅฆๅค–ๆœ‰ๆไพ›[4 bit ้‡ๅŒ–ๆจกๅž‹](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1-4bit)๏ผŒ้‡ๅŒ–ๆจกๅž‹ไธป่ฆๆ˜ฏๆไพ›ไฝฟ็”จ่€…็š„ไพฟๅˆฉๆ€ง๏ผŒๅฏ่ƒฝๆœƒๅฝฑ้Ÿฟๆ•ˆ่ƒฝ่ˆ‡ๆ›ดๅคšไธๅฏ้ ๆœŸ็š„ๅ•้กŒ๏ผŒ้‚„่ซ‹ไฝฟ็”จ่€…็†่งฃ่ˆ‡ๆณจๆ„ใ€‚ # ๆจกๅž‹ๅƒๆ•ธ * ๅƒๆ•ธ้‡: 8B * ๆœ€ๅคงๅ…งๅฎน้•ทๅบฆ (context length): 8K * ็นไธญ่จ“็ทด่ณ‡ๆ–™ token ้‡: 43B * ่จ“็ทดๆ™‚้–“: 2336 H100 GPU Hours # ็‰น่‰ฒ * ๅšดๆ ผๆŠŠ้—œๆจกๅž‹็š„่จ“็ทด่ณ‡ๆ–™๏ผŒๆๅ‡ๆจกๅž‹็”Ÿๆˆ่ณ‡ๆ–™็š„ๅฏไฟกไปปๆ€งๅ’Œ้ฉ็”จๆ€ง * ้‡ๅฐ่‡ชๅ‹•ๆ‘˜่ฆใ€ๅฏซไฟกใ€ๅฏซๆ–‡็ซ ใ€ไธญ็ฟป่‹ฑใ€่‹ฑ็ฟปไธญ็ญ‰่พฆๅ…ฌๅฎคๅธธ็”จไปปๅ‹™ๅšๅŠ ๅผท * ้‡ๅฐๅฐ็ฃๅœจๅœฐๆ–‡ๅŒ–ใ€็”จ่ชžใ€ๅœ‹ๆƒ…็ญ‰็Ÿฅ่ญ˜ๅšๅŠ ๅผท * ๅ…ทๅ‚™ๅคš่ผชๅ•็ญ”ๅฐ่ฉฑ่ƒฝๅŠ› # ๆ‡‰็”จ็ฏ„ไพ‹ | ไปปๅ‹™ | ไฝฟ็”จ่€…่ผธๅ…ฅ | ๆจกๅž‹่ผธๅ‡บ | | --- | -------- | ------- | | ๅฏซๆ–‡็ซ  | ่ซ‹ไปฅไปฅไธ‹ๅ…งๅฎน็‚บๅŸบ็คŽ๏ผŒๅฏซไธ€็ฏ‡ๆ–‡็ซ ๏ผšๆ’ฐๅฏซไธ€็ฏ‡ไฝœๆ–‡๏ผŒ้กŒ็›ฎ็‚บใ€Šไธ€ๅผต่ˆŠ็…ง็‰‡ใ€‹๏ผŒๅ…งๅฎน่ฆๆฑ‚็‚บ๏ผš้ธๆ“‡ไธ€ๅผตไปคไฝ ๅฐ่ฑกๆทฑๅˆป็š„็…ง็‰‡๏ผŒ่ชชๆ˜Žไปคไฝ ๅฐ่ฑกๆทฑๅˆป็š„ๅŽŸๅ› ๏ผŒไธฆๆ่ฟฐ็…ง็‰‡ไธญ็š„ๅฝฑๅƒๅŠ่ƒŒๅพŒ็š„ๆ•…ไบ‹ใ€‚่จ˜้Œ„ๆˆ้•ท็š„้Ž็จ‹ใ€่ˆ‡ไป–ไบบ็š„ๆƒ…ๆ™ฏใ€็’ฐๅขƒ่ฎŠ้ทๅ’Œ็พŽ้บ—็š„ๆ™ฏ่‰ฒใ€‚ | 
ไธ€ๅผต่ˆŠ็…ง็‰‡๏ผŒๆ‰ฟ่ผ‰่‘—ๆ™‚ๅ…‰็š„็—•่ทกๅ’Œๅ›žๆ†ถ็š„ๆบซๅบฆใ€‚ๆˆ‘้‚„่จ˜ๅพ—้‚ฃๅผต็…ง็‰‡ๆ˜ฏๆ€Ž้บผๅ‡บ็พ็š„๏ผŒ้‚ฃๆ˜ฏไธ€ๅผตๆณ›้ปƒ็š„้ป‘็™ฝ็›ธ็‰‡๏ผŒ่ฃฑๅœจ็‰†ไธŠ๏ผŒ่จด่ชช่‘—ไธ€ๆฎตๅฑฌๆ–ผๆˆ‘ๅฎถๆ—็š„ๆ•…ไบ‹ใ€‚<br><br>้‚ฃๅผต็…ง็‰‡ไธญ๏ผŒๆˆ‘็š„็ฅ–็ˆถๆฏ็ซ™ๅœจ็”ฐ้‡Ž้–“๏ผŒๅพŒๆ–นๆ˜ฏ็ถฟๅปถ็š„ๅฑฑๅท’๏ผŒๅ‰ๆ–นๅ‰‡ๆ˜ฏไธ€็‰‡้‡‘้ปƒ็š„็จป็”ฐใ€‚้€™ๅน…ๆ™ฏ่ฑกไปคๆˆ‘ๅฐ่ฑกๆทฑๅˆป๏ผŒๅ› ็‚บ้€™ๆ˜ฏๆˆ‘็š„็ซฅๅนดๅ›žๆ†ถ๏ผŒไนŸๆ˜ฏๆˆ‘ๆˆ้•ท้Ž็จ‹ไธญๆœ€็พŽๅฅฝ็š„้ขจๆ™ฏ็ทšใ€‚<br><br>ๆˆ‘ๅ‡บ็”Ÿๅœจๅฐ็ฃ้„‰ไธ‹๏ผŒๅพžๅฐๅฐฑ็”Ÿๆดปๅœจ้€™ๆจฃไธ€ๅ€‹็ด”ๆจธ็š„็’ฐๅขƒไธญใ€‚ๆฏ็•ถๆˆ‘่ทŸ่‘—ๅฎถไบบๅœจ็”ฐ่ฃกๅทฅไฝœๆ™‚๏ผŒๅฐฑๆœƒ็œ‹ๅˆฐ้€™ๆจฃ็š„็พŽๆ™ฏใ€‚้‡‘้ปƒ็š„็จป็ฉ—้šจ่‘—ๅพฎ้ขจๆ–ๆ›ณ๏ผŒๅฑฑๅท’ๅ‰‡้œ้œๅœฐ็Ÿ—็ซ‹ๅœจ้ ๆ–น๏ผŒ็ตฆไบบไธ€็จฎๅฎ‰่ฉณ่ˆ‡็ฅฅๅ’Œ็š„ๆ„Ÿ่ฆบใ€‚<br><br>้‚ฃๅผต็…ง็‰‡ๆ‹ๆ”ๆ–ผๆฐ‘ๅœ‹50ๅนดไปฃ๏ผŒ็•ถๆ™‚ๅฐ็ฃ็ถ“ๆฟŸๅฐšๆœช่ตท้ฃ›๏ผŒ่พฒๆฅญ็‚บไธป่ฆ็”ขๆฅญใ€‚ๆˆ‘็š„็ฅ–็ˆถๆฏ้ƒฝๆ˜ฏ่พฒๆฐ‘๏ผŒไป–ๅ€‘่พ›ๅ‹ค่€•็จฎ๏ผŒ็‚บๆˆ‘ๅ€‘ๅฎถๆไพ›ๆบซ้ฃฝใ€‚ๆฏ็•ถๆˆ‘็œ‹ๅˆฐ้‚ฃๅผต็…ง็‰‡๏ผŒๅฐฑๆœƒๆƒณ่ตทไป–ๅ€‘็‚บๆˆ‘ๅ€‘ๆ‰€ๅš็š„็Šง็‰ฒๅ’ŒๅŠชๅŠ›ใ€‚<br><br>้™คไบ†็พŽ้บ—็š„้ขจๆ™ฏๅค–๏ผŒ็…ง็‰‡ไธญ้‚„ๆœ‰ไธ€ๆฎตๅ‹•ไบบ็š„ๆ•…ไบ‹ใ€‚ๅœจๆˆ‘้‚„ๅพˆๅฐ็š„ๆ™‚ๅ€™๏ผŒๅฎถ่ฃก็™ผ็”Ÿไบ†ไธ€ๅ ดๅคง็ซ๏ผŒๅฐ‡ๆˆ‘ๅ€‘็š„ๆˆฟๅญๅ’Œๆ‰€ๆœ‰่ฒก็”ข้ƒฝ็‡’ๅ…‰ไบ†ใ€‚ๅนธๅฅฝๅ…จๅฎถไบบ้ƒฝๅนณๅฎ‰็„กไบ‹๏ผŒไฝ†็”Ÿๆดปๅป่ฎŠๅพ—ๅๅˆ†ๅ›ฐ่‹ฆใ€‚ๆˆ‘ๅ€‘่ขซ่ฟซๆฌๅˆฐๅฆไธ€ๅ€‹ๆ‘่ŽŠ๏ผŒๅœจ้‚ฃ่ฃก้‡ๆ–ฐ้–‹ๅง‹็”Ÿๆดปใ€‚<br><br>้‚ฃๅผต็…ง็‰‡ไนŸ่จ˜้Œ„ไบ†้‚ฃๆฎต่‰ฑ่พ›็š„ๆญฒๆœˆใ€‚้›–็„ถๆˆ‘ๅ€‘ๅคฑๅŽปไบ†่จฑๅคšๆฑ่ฅฟ๏ผŒไฝ†ๅฎถไบบไน‹้–“็š„ๆ„Ÿๆƒ…ๅปๆ›ดๅŠ ๆทฑๅŽšใ€‚ๆˆ‘ๅ€‘ไบ’็›ธๆ‰ถๆŒ๏ผŒๅ…ฑๅŒ้ขๅฐ็”ŸๆดปไธŠ็š„็จฎ็จฎๆŒ‘ๆˆฐใ€‚<br><br>ๅฆ‚ไปŠ๏ผŒๆˆ‘ๅทฒ้•ทๅคงๆˆไบบ๏ผŒๅœจ้ƒฝๅธ‚่ฃกๅทฅไฝœใ€‚ไฝ†ๆฏ็•ถๆˆ‘็œ‹ๅˆฐ้‚ฃๅผต่ˆŠ็…ง็‰‡๏ผŒๆˆ‘้ƒฝๆœƒๆƒณ่ตท้‚ฃๆฎต็ฐกๅ–ฎ่€Œ็พŽๅฅฝ็š„ๆ™‚ๅ…‰ใ€‚้‚ฃไบ›็พŽ้บ—็š„้ขจๆ™ฏใ€่ฆชไบบ็š„็ฌ‘ๅฎน๏ผŒไปฅๅŠ้‚ฃๆฎตๅ……ๆปฟๆŒ‘ๆˆฐๅปๅˆๅ……ๆปฟๅธŒๆœ›็š„ๆ—ฅๅญ๏ผŒ้ƒฝๅฐ‡ๆฐธ้ ้ซๅˆปๅœจๆˆ‘็š„่จ˜ๆ†ถๆทฑ่™•ใ€‚<br><br>้€™ๅผต่ˆŠ็…ง็‰‡๏ผŒไธๅƒ…ๅƒ…ๆ˜ฏไธ€ๅผต็…ง็‰‡๏ผŒๆ›ดๆ˜ฏไธ€ๅ€‹ๅฏถ่ฒด็š„ๅฎถๆ—ๅฒ๏ผŒไธ€ๅ€‹้—œๆ–ผๆˆ‘ๅ€‘ๅฆ‚ไฝ•ๅ…‹ๆœๅ›ฐ้›ฃใ€็ๆƒœ็พๅœจ็š„ๆ•…ไบ‹ใ€‚ๆˆ‘ๅฐ‡ๆŠŠ้€™ๅผต็…ง็‰‡ๅฅฝๅฅฝไฟๅญ˜ไธ‹ไพ†๏ผŒ่ฎ“ๅพŒไปฃไนŸ่ƒฝๅค ่ช่ญ˜ๆˆ‘ๅ€‘็š„ๆ นๆบ๏ผŒไธฆไธ”้Š˜่จ˜้‚ฃไบ›ๅ€ผๅพ—็่—็š„็พŽๅฅฝๅ›žๆ†ถใ€‚ | | ๅฏซไฟก | ่ซ‹ไปฅๅ“็‰Œ็ถ“็†็š„่บซไปฝ๏ผŒ็ตฆๅปฃๅ‘Šๅ…ฌๅธ็š„ๅ‰ตๆ„็ธฝ็›ฃๅฏซไธ€ๅฐไฟก๏ผŒๆๅ‡บๅฐๆ–ผๆ–ฐ็”ขๅ“ๅปฃๅ‘Šๅฎฃๅ‚ณๆดปๅ‹•็š„ๅ‰ตๆ„ๅปบ่ญฐใ€‚ | [ๆ‚จ็š„ๅง“ๅ]<br>[ๆ‚จ็š„่ท็จฑ]<br>[ๆ‚จ็š„ๅ…ฌๅธๅ็จฑ]<br>[้ƒตไปถๅœฐๅ€]<br>[้›ป่ฉฑ่™Ÿ็ขผ]<br>[็™ผๆ–‡ๆ—ฅๆœŸ]<br><br>[ๅ‰ตๆ„็ธฝ็›ฃๅง“ๅ]<br>[ๅปฃๅ‘Šๅ…ฌๅธๅ็จฑ]<br>[ๅปฃๅ‘Šๅ…ฌๅธๅœฐๅ€]<br><br>ไธปๆ—จ:้—œๆ–ผๆ–ฐ็”ขๅ“ๅฎฃๅ‚ณๆดปๅ‹•ๅ‰ตๆ„็™ผๆƒณ๏ผŒ่ซ‹ๆŸฅ็…งใ€‚<br><br>ๅฐŠๆ•ฌ็š„[ๅ‰ตๆ„็ธฝ็›ฃๅง“ๅ]๏ผŒ<br><br>ๆˆ‘ๅธŒๆœ›้€™ๅฐไฟก่ƒฝๆ‰พๅˆฐไฝ ๅฟ™็ขŒไธญๅฅฝๅฅฝไผ‘ๆฏใ€‚ๆˆ‘ๅฏซ้€™ๅฐไฟก๏ผŒๆ˜ฏ็‚บไบ†ๅˆ†ไบซๆˆ‘ๅฐๆ–ผๆˆ‘ๅ€‘ๅณๅฐ‡ๆŽจๅ‡บ็š„ๆ–ฐ็”ขๅ“ๅฎฃๅ‚ณๆดปๅ‹•ไธ€ไบ›ๅ‰ตๆ„็™ผๆƒณใ€‚ๆˆ‘ๆทฑไฟก๏ผŒ้€™ไบ›ๆƒณๆณ•่ƒฝๅนซๅŠฉๆˆ‘ๅ€‘ๆ–ฐ็”ขๅ“ๅœจๅธ‚ๅ ดไธŠ่„ซ็ฉŽ่€Œๅ‡บ๏ผŒๅธๅผ•ๆถˆ่ฒป่€…็š„็›ฎๅ…‰ใ€‚<br><br>้ฆ–ๅ…ˆ๏ผŒๆˆ‘ๅปบ่ญฐๆˆ‘ๅ€‘ๅฏไปฅๅˆฉ็”จ็คพ็พคๅช’้ซ”ๅนณๅฐ๏ผŒๅƒๆ˜ฏFacebookใ€Instagramใ€YouTube็ญ‰๏ผŒ่ฃฝไฝœไธ€็ณปๅˆ—ๅผ•ไบบๅ…ฅๅ‹็š„็Ÿญ็‰‡ๅ’Œ่ฒผๆ–‡ใ€‚้€™ไบ›ๅ…งๅฎนๅฏไปฅๅฑ•็พๆ–ฐ็”ขๅ“็š„็‰น่‰ฒๅ’Œๅ„ช้ปž๏ผŒๅŒๆ™‚ไนŸ่ƒฝๅ‚ณ้”ๅ‡บๆˆ‘ๅ€‘ๅ“็‰Œ้‡่ฆ–็’ฐไฟใ€ๅฅๅบท็š„ๆ ธๅฟƒๅƒนๅ€ผใ€‚ๆˆ‘ๅ€‘ๅฏไปฅ่ˆ‡็ถฒ็ด…ๆˆ–ๆ„่ฆ‹้ ˜่ข–ๅˆไฝœ๏ผŒ่ฎ“ไป–ๅ€‘่ฉฆ็”จไธฆๅˆ†ไบซไป–ๅ€‘็š„ไฝฟ็”จ็ถ“้ฉ—๏ผŒ้€™ๅฐ‡ๆœ‰ๅŠฉๆ–ผๅขžๅŠ ๆˆ‘ๅ€‘็”ขๅ“็š„ๅฏไฟกๅบฆๅ’Œ่ƒฝ่ฆ‹ๅบฆใ€‚<br><br>ๅ…ถๆฌก๏ผŒๆˆ‘ๅ€‘ๅฏไปฅ่ˆ‰่พฆๆ–ฐ็”ขๅ“่ฉฆๅƒ/่ฉฆ็”จๆดปๅ‹•๏ผŒ้‚€่ซ‹ๆฐ‘็œพ่ฆช่บซ้ซ”้ฉ—ๆˆ‘ๅ€‘็š„ๆ–ฐ็”ขๅ“ใ€‚ๆˆ‘ๅ€‘ๅฏไปฅๅœจๅ„ๅคง็™พ่ฒจๅ…ฌๅธใ€่ถ…ๅธ‚ใ€ไปฅๅŠๆˆ‘ๅ€‘็š„้–€ๅธ‚่ˆ‰่พฆ้€™ไบ›ๆดปๅ‹•๏ผŒ่ฎ“ๅƒ่ˆ‡่€…ๅ…่ฒป่ฉฆๅƒ/่ฉฆ็”จๆ–ฐ็”ขๅ“๏ผŒไธฆๅกซๅฏซๅ•ๅทๅ›ž้ฅ‹ๆ„่ฆ‹ใ€‚้€™็จฎไบ’ๅ‹•็š„ๆ–นๅผ๏ผŒไธๅƒ…่ƒฝๆ”ถ้›†ๅˆฐๅฏถ่ฒด็š„ๅธ‚ๅ 
ดๅ้ฅ‹๏ผŒ้‚„่ƒฝ่ฎ“ๆฐ‘็œพๅฐๆˆ‘ๅ€‘็š„ๅ“็‰Œ็”ข็”Ÿๆญฃ้ข็š„ๅฐ่ฑกใ€‚<br><br>็ฌฌไธ‰๏ผŒๆˆ‘ๅ€‘ๅฏไปฅ่ˆ‡็•ถๅœฐ้ค้ฃฒๆฅญ่€…ๅˆไฝœ๏ผŒๅฐ‡ๆˆ‘ๅ€‘็š„ๆ–ฐ็”ขๅ“็ดๅ…ฅไป–ๅ€‘็š„่œๅ–ฎไธญใ€‚้€™ๅฐ‡ไฝฟๆˆ‘ๅ€‘็š„็”ขๅ“ๆŽฅ่งธๅˆฐๆ›ดๅปฃๆณ›็š„ๅฎข็พค๏ผŒๅŒๆ™‚ไนŸ่ƒฝ่ญ‰ๆ˜Žๆˆ‘ๅ€‘็š„็”ขๅ“ๅœจ็ƒน้ฃชๅ’Œ็พŽ้ฃŸ็”ขๆฅญไธญๅ…ทๆœ‰ๅƒนๅ€ผใ€‚<br><br>็ฌฌๅ››๏ผŒๆˆ‘ๅ€‘ๅฏไปฅ่ˆ‰่พฆไธ€ๅ ดๆ–ฐ็”ขๅ“็™ผ่กจๆœƒ๏ผŒ้‚€่ซ‹ๅช’้ซ”ใ€ๆฅญ็•Œไบบๅฃซใ€ไปฅๅŠ้‡่ฆๅฎขๆˆถๅƒๅŠ ใ€‚ๅœจ้€™ๅ ดๆดปๅ‹•ไธญ๏ผŒๆˆ‘ๅ€‘ๅฏไปฅๅฑ•็คบๆ–ฐ็”ขๅ“็š„ๅ„้ …็‰น่‰ฒๅ’Œๅ„ชๅ‹ข๏ผŒไธฆ็”ฑๅฐˆๆฅญๅปšๅธซ็คบ็ฏ„ๅฆ‚ไฝ•ๅฐ‡ๆ–ฐ็”ขๅ“่žๅ…ฅๅ„ๅผๆ–™็†ไธญใ€‚้€™ๅฐ‡ๆ˜ฏๅ‘ๅคง็œพๅฑ•็คบๆˆ‘ๅ€‘ๅ“็‰Œๅ‰ตๆ–ฐ็ฒพ็ฅžๅ’Œๅฐˆๆฅญ่ƒฝๅŠ›็š„็ต•ไฝณๆฉŸๆœƒใ€‚<br><br>ๆœ€ๅพŒ๏ผŒๆˆ‘ๅปบ่ญฐๆˆ‘ๅ€‘ๅฏไปฅ่ฃฝไฝœไธ€็ณปๅˆ—็ฒพ็พŽ็š„ๅŒ…่ฃๅ’Œ็ฆฎ็›’๏ผŒไปฅๅธๅผ•ๆถˆ่ฒป่€…ๅœจ็ฏ€ๆ…ถๅ’Œ้€็ฆฎๆ™‚้ธๆ“‡ๆˆ‘ๅ€‘็š„็”ขๅ“ใ€‚็ฒพ็ทป็š„ๅŒ…่ฃๅฐ‡่ƒฝๆๅ‡ๆˆ‘ๅ€‘็”ขๅ“็š„ๅƒนๅ€ผๆ„Ÿ๏ผŒๅŒๆ™‚ไนŸ่ƒฝๅ‚ณ้”ๅ‡บๆˆ‘ๅ€‘ๅ“็‰Œๅฐ็ดฐ็ฏ€ๅ’Œๅ“่ณช็š„ๅ …ๆŒใ€‚<br><br>ๆˆ‘ๆทฑไฟก๏ผŒ้€™ไบ›ๅ‰ตๆ„็™ผๆƒณๅฐ‡ๆœ‰ๅŠฉๆ–ผๆˆ‘ๅ€‘ๆ–ฐ็”ขๅ“ๅœจๅธ‚ๅ ดไธŠ็ฒๅพ—ๆˆๅŠŸใ€‚่‹ฅๆ‚จๅฐ้€™ไบ›ๆƒณๆณ•ๆœ‰ไปปไฝ•ๆ„่ฆ‹ๆˆ–ๅปบ่ญฐ๏ผŒ่ซ‹ไธๅ่ˆ‡ๆˆ‘ๅˆ†ไบซใ€‚ๆˆ‘ๆœŸๅพ…่‘—่ˆ‡ๆ‚จๅˆไฝœ๏ผŒๅ…ฑๅŒ็‚บ[ๆ‚จ็š„ๅ…ฌๅธๅ็จฑ]็š„ๆˆ้•ทๅ’ŒๆˆๅŠŸ่€ŒๅŠชๅŠ›ใ€‚<br><br>่ฌ่ฌๆ‚จๆŠฝ็ฉบ้–ฑ่ฎ€ๆˆ‘็š„ไฟก๏ผŒไธฆๆœŸๅพ…่‘—่ˆ‡ๆ‚จ้€ฒไธ€ๆญฅ่จŽ่ซ–้€™ไบ›ๆƒณๆณ•ใ€‚<br><br>ๆ•ฌ็ฆฎ<br><br>[ๆ‚จ็š„ๅง“ๅ]<br>[ๆ‚จ็š„่ท็จฑ]<br>[ๆ‚จ็š„ๅ…ฌๅธๅ็จฑ] | | ๆ‘˜่ฆ | ่ซ‹ๅฐ‡้€™็ฏ‡ๆ–‡็ซ ็ฒพ็ฐกๆข็†ๅŒ–:ใ€Œ็”ขๆฅญๅ‰ตๆ–ฐๆขไพ‹็ฌฌ10ๆขไน‹2ๅŠ็ฌฌ72ๆขๆขๆ–‡ไฟฎๆญฃๆกˆใ€ไฟ—็จฑใ€Œๅฐ็‰ˆๆ™ถ็‰‡ๆณ•ใ€,้‡ๅฐๅŠๅฐŽ้ซ”ใ€้›ปๅ‹•่ปŠใ€5G็ญ‰ๆŠ€่ก“ๅ‰ตๆ–ฐไธ”ๅฑ…ๅœ‹้š›ไพ›ๆ‡‰้ˆ้—œ้ตๅœฐไฝๅ…ฌๅธ,ๆไพ›ๆœ€้ซ˜25%็‡Ÿๆ‰€็จ…ๆŠ•ๆŠตๅ„ชๆƒ ,ไผๆฅญ้ฉ็”จ่ฆไปถๅŒ…ๅซ็•ถๅนดๅบฆ็ ”็™ผ่ฒป็”จใ€็ ”็™ผๅฏ†ๅบฆ้”ไธ€ๅฎš่ฆๆจก,ไธ”ๆœ‰ๆ•ˆ็จ…็އ้”ไธ€ๅฎšๆฏ”็އใ€‚<br>็‚บๅ› ๆ‡‰็ถ“ๆฟŸๅˆไฝœๆšจ็™ผๅฑ•็ต„็น”(OECD)ๅœ‹ๅฎถๆœ€ไฝŽ็จ…่ฒ ๅˆถ่ชฟๆ•ด,ๅ…ถไธญๆœ‰ๆ•ˆ็จ…็އ้–€ๆชป,ๆฐ‘ๅœ‹112ๅนด่จ‚็‚บ12%,113ๅนดๆ–™ๅฐ‡ๆ้ซ˜่‡ณ15%,ไฝ†ไปๅพ—ๅฏฉ้…Œๅœ‹้š›้–“ๆœ€ไฝŽ็จ…่ฒ ๅˆถๅฏฆๆ–ฝๆƒ…ๅฝขใ€‚<br>็ถ“ๆฟŸ้ƒจๅฎ˜ๅ“ก่กจ็คบ,ๅทฒๅ’Œ่ฒกๆ”ฟ้ƒจๅ”ๅ•†้€ฒๅ…ฅๆœ€ๅพŒ้šŽๆฎต,้™คไผๆฅญ็ ”็™ผๅฏ†ๅบฆ่จ‚ๅœจ6%,็›ฎๅ‰ๅทฒ็ขบ่ช,ไผๆฅญ่ณผ็ฝฎๅ…ˆ้€ฒ่ฃฝ็จ‹็š„่จญๅ‚™ๆŠ•่ณ‡้‡‘้ก้”100ๅ„„ๅ…ƒไปฅไธŠๅฏๆŠตๆธ›ใ€‚<br>่ฒกๆ”ฟ้ƒจๅฎ˜ๅ“ก่กจ็คบ,็ ”ๅ•†้Ž็จ‹ไธญ,้‡ๅฐๅฐ็ฃ็”ขๆฅญ่ˆ‡ๅ…ถๅœจๅœ‹้š›้–“้กžไผผ็š„ๅ…ฌๅธ้€ฒ่กŒๆทฑๅ…ฅ็ ”็ฉถ,ๅœจ่จญๅ‚™้ƒจๅˆ†,็•ข็ซŸ้ฉ็”จ็”ขๅ‰ต10ไน‹2็š„ๆฅญ่€…ๆ˜ฏไปฃ่กจๅฐ็ฃ้šŠๆ‰“ใ€Œๅœ‹้š›็›ƒใ€,ๆŠ•ๅ…ฅ้‡‘้กไธ้”100ๅ„„ๅ…ƒ,ๅฏ่ƒฝไนŸๆ‰“ไธไบ†ใ€‚<br>่‡ณๆ–ผๅ‚™ๅ—้—œๆณจ็š„็ ”็™ผ่ฒป็”จ้–€ๆชป,็ถ“ๆฟŸ้ƒจๅฎ˜ๅ“ก่กจ็คบ,ๆญท็ถ“่ˆ‡่ฒกๆ”ฟ้ƒจไพ†ๅ›žๅฏ†ๅˆ‡่จŽ่ซ–,็ ”็™ผ่ฒป็”จ้–€ๆชปๆœ‰ๆœ›่ฝๅœจ60ๅ„„่‡ณ70ๅ„„ๅ…ƒไน‹้–“ใ€‚<br>่ฒกๆ”ฟ้ƒจๅฎ˜ๅ“กๆŒ‡ๅ‡บ,็ ”็™ผๆ”ธ้—œๅฐ็ฃๆœชไพ†็ถ“ๆฟŸๆˆ้•ทๅ‹•่ƒฝ,้–€ๆชปไธ่ƒฝใ€Œ้ซ˜ไธๅฏๆ”€ใ€,่ตทๅˆ้›–่จญๅฎšๅœจ100ๅ„„ๅ…ƒ,ไน‹ๆ‰€ไปฅๆœƒ่ชฟ้™,ๆญฃๆ˜ฏ็›ผ่ฎ“ไผๆฅญ่ฆบๅพ—ๆœ‰่พฆๆณ•้”ๅพ—ๅˆฐ้–€ๆชปใ€้€ฒ่€Œ้ฉ็”จ็งŸ็จ…ๅ„ชๆƒ ,ๆ‰ๆœ‰ๅ‹•ๅŠ›็นผ็บŒๆŠ•ๅ…ฅ็ ”็™ผ,็ถญๆŒๅœ‹้š›ไพ›ๆ‡‰้ˆ้—œ้ตๅœฐไฝใ€‚<br>็ถ“ๆฟŸ้ƒจๅฎ˜ๅ“ก่กจ็คบ,ๅ› ๅป ๅ•†็ ”็™ผ่ฒป็”จๅนณๅ‡็‚บ30ใ€40ๅ„„ๅ…ƒ,ๅ…ถไธญ,IC่จญ่จˆๆฅญ่€…ไป‹ๆ–ผ30ๅ„„่‡ณ60ๅ„„ๅ…ƒ็ฏ„ๅœ,่‹ฅๅฐ‡้–€ๆชป่จ‚ๅœจ100ๅ„„ๅ…ƒ,็ฌฆๅˆๆขไปถ็š„ๆฅญ่€…่ผƒๅฐ‘ใ€ๅˆบๆฟ€่ช˜ๅ› ไธ่ถณ;ๆญคๅค–,่‹ฅ็ฌฆๅˆ็”ณ่ซ‹้–€ๆชป็š„ๆฅญ่€…ๅขžๅŠ ,ๅฐ‡ๅฏๆ้ซ˜ไผๆฅญๅœจๅฐๆŠ•่ณ‡้‡‘้ก,่ฒกๆ”ฟ้ƒจ็จ…ๆ”ถไนŸ่ƒฝๅ› ๆญค็ฒๅพ—ๆŒนๆณจใ€‚<br>IC่จญ่จˆๆฅญ่€…่ฟ‘ๆ—ฅ้ ป้ ป้‡ๅฐ็”ขๅ‰ต10ไน‹2็™ผ่ฒ,ๅธŒๆœ›้™ไฝŽ้ฉ็”จ้–€ๆชป,ๅŠ ไธŠๅ„ๅœ‹ๅŠ›ๆ‹šไพ›ๆ‡‰้ˆ่‡ชไธปๅŒ–ใ€ๅŠ ็ขผ่ฃœๅŠฉๅŠๅฐŽ้ซ”็”ขๆฅญ,็ถ“ๆฟŸ้ƒจๅฎ˜ๅ“ก่กจ็คบ,็ถ“ๆฟŸ้ƒจๅ’Œ่ฒกๆ”ฟ้ƒจๅฐฑ็”ขๅ‰ต10ไน‹2้”ๆˆๅ…ฑ่ญ˜,็ˆญๅ–่ฎ“ๆ›ดๅคšๆฅญ่€…ๅ—ๆƒ ,็›ผๅขžๅผทไผๆฅญๆŠ•่ณ‡ๅŠ›้“ๅŠ้žๅ›บๅฐ็ฃๆŠ€่ก“ๅœฐไฝใ€‚<br>่ฒกๆ”ฟ้ƒจๅฎ˜ๅ“ก่กจ็คบ,็งŸ็จ…็Žๅ‹ต็š„ๅˆถๅฎšๅฟ…้ 
"proactive yet prudent", with achieving the purpose of the incentive as the highest principle; at the current stage, companies competing in the "domestic league" can still apply the tax incentives under Article 10 and Article 10-1 of the Statute for Industrial Innovation, jointly strengthening Taiwan's economic development.<br>The Ministry of Economic Affairs and the Ministry of Finance are making a final confirmation of the R&D expenditure threshold; after the subsidiary regulations are announced in the next day or two, there will be about 30 days for further discussion and adjustment with industry, in the hope that Article 10-2 can take effect in June. | The key points of the article are summarized as follows:<br><br>- The "Taiwanese chips act", i.e. the draft amendment to Article 10-2 and Article 72 of the Statute for Industrial Innovation, aims to encourage companies engaged in key technology innovation, especially those in fields such as semiconductors, electric vehicles, and 5G that play critical roles in international supply chains.<br><br>- Companies that meet specific conditions, including reaching the annual R&D expenditure and R&D intensity targets, may enjoy a corporate income tax credit of up to 25%. The R&D intensity target is set at 6%, and to obtain the credit a company must show that its effective tax rate reaches a specified ratio, initially set at 12% for 2023 and raised to 15% in 2024, though this may be adjusted according to how international tax regimes are implemented.<br><br>- In formulating the policy, the government took the adjustment of the OECD global minimum tax regime into account, so the effective tax rate threshold will be adjusted as circumstances require.<br><br>- Government agencies have consulted corporate stakeholders on the content of the bill and settled certain details, for example setting the threshold for investment in advanced process equipment eligible for the credit at NT$10 billion.<br><br>- After discussion, the R&D expenditure threshold may be lowered from the initially proposed NT$10 billion to between NT$6 billion and NT$7 billion, to motivate more companies to qualify and invest in R&D, maintaining Taiwan's key position in international supply chains.<br><br>- Ministry of Economic Affairs officials said that the purpose of lowering the threshold is to make more companies feel the standard is within reach, giving them the motivation to apply for the tax incentive, which may in turn raise their investment in Taiwan and so increase government tax revenue.<br><br>- Because IC design houses generally have low R&D expenditures, very few companies would qualify if the threshold were set too high, so an adjustment is necessary.<br><br>- As countries promote supply chain autonomy and increase subsidies for their own semiconductor industries, the Ministry of Economic Affairs is working to ensure that more companies can benefit from Article 10-2 of the Statute for Industrial Innovation, strengthening investment and consolidating Taiwan's technological position.<br><br>- The government stresses that tax incentives should be designed to serve their incentive goals; at the same time, it will continue to provide other tax incentives to companies operating domestically, to promote Taiwan's economic growth.<br><br>- The final R&D expenditure threshold will be determined jointly by the Ministry of Economic Affairs and the Ministry of Finance; after the relevant subsidiary regulations are announced, there will be about 30 days for all parties to discuss them and propose adjustments, with the goal of bringing the new policy into effect in June. |
| Translation | The following is English content; please help me translate it into Chinese. Dongshan coffee is famous for its unique position, and the constant refinement of production methods. The flavor is admired by many caffeine afficionados. | ๆฑๅฑฑๅ’–ๅ•กไปฅ็จ็‰น็š„ไฝ็ฝฎ๏ผŒๅŠ ไธŠไธๆ–ท็ฒพ้€ฒ็š„่ฃฝไฝœๆ‰‹ๆณ•่€Œ่žๅ๏ผŒ้ขจๅ‘ณๆ›ดๆ˜ฏ่ฎ“่จฑๅคšๅ’–ๅ•ก็™ก่ถจไน‹่‹ฅ้ถฉใ€‚ (The model's Traditional Chinese rendering of the English text in the prompt.) |

# Usage
* Sample code
  * Sample code and documentation will be released on GitHub later.
* Prompt templates
  * General Q&A
```python
chat = [
    {"role": "user", "content": "{question}"},
]
prompt = tokenizer.apply_chat_template(chat)
```
    * Replace {question} with the user's input.
  * With a system prompt
```python
chat = [
    {"role": "system", "content": "{sys}"},
    {"role": "user", "content": "{question}"},
]
prompt = tokenizer.apply_chat_template(chat)
```
    * Replace {sys} with an instruction, e.g.: ไฝ ๆ˜ฏไธ€ๅ€‹ไพ†่‡ชๅฐ็ฃ็š„AIๅŠฉ็†๏ผŒไฝ ็š„ๅๅญ—ๆ˜ฏ TAIDE๏ผŒๆจ‚ๆ–ผไปฅๅฐ็ฃไบบ็š„็ซ‹ๅ ดๅนซๅŠฉไฝฟ็”จ่€…๏ผŒๆœƒ็”จ็น้ซ”ไธญๆ–‡ๅ›ž็ญ”ๅ•้กŒ (You are an AI assistant from Taiwan named TAIDE, glad to help users from a Taiwanese perspective, and you answer questions in Traditional Chinese).
    * Replace {question} with the user's question.
  * Multi-turn Q&A
```python
chat = [
    {"role": "system", "content": "{sys}"},
    {"role": "user", "content": "{question1}"},
    {"role": "assistant", "content": "{model_anwer_1}"},
    {"role": "user", "content": "{question2}"},
]
prompt = tokenizer.apply_chat_template(chat)
```
    * Replace {sys} with an instruction, e.g.: ไฝ ๆ˜ฏไธ€ๅ€‹ไพ†่‡ชๅฐ็ฃ็š„AIๅŠฉ็†๏ผŒไฝ ็š„ๅๅญ—ๆ˜ฏ TAIDE๏ผŒๆจ‚ๆ–ผไปฅๅฐ็ฃไบบ็š„็ซ‹ๅ ดๅนซๅŠฉไฝฟ็”จ่€…๏ผŒๆœƒ็”จ็น้ซ”ไธญๆ–‡ๅ›ž็ญ”ๅ•้กŒ (You are an AI assistant from Taiwan named TAIDE, glad to help users from a Taiwanese perspective, and you answer questions in Traditional Chinese).
    * Replace {question1} with the user's first question.
    * Replace {model_anwer_1} with the model's first answer.
    * Replace {question2} with the user's second question.
* For more details, see the [Llama3 documentation](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/).

# Training method
* Hardware and software
  * NCHC (National Center for High-performance Computing) H100
  * Training framework: PyTorch
* Data preprocessing
  * Character normalization
  * Deduplication
  * Noise removal
    * HTML tags and JavaScript in web data
    * Non-standard characters and mojibake
    * Articles with too few characters
    * Article-specific formatting, such as line breaks inserted for layout
  * Removal of personal information, such as email addresses and phone numbers
  * Removal of inappropriate content, such as gambling and pornography
* Continuous pretraining (CP)
  * Supplements the model with a large amount of Traditional Chinese knowledge from trustworthy sources
  * Hyperparameters
    * optimizer: AdamW
    * learning rate: 1e-4
    * batch size: 1M tokens
    * epoch: 1
* Fine-tuning (FT)
  * Enables the model to answer questions asked in Traditional Chinese
  * Hyperparameters
    * optimizer: AdamW
    * learning rate: 5e-5
    * batch size: 256K tokens
    * epoch: 3

# Training data
* Continuous pretraining data (about 140 GB in total)

| Dataset | Description |
| --- | -------- |
| Litigation data | Civil, criminal, and administrative litigation records from [Judgments of the Judicial Yuan](https://judgment.judicial.gov.tw/FJUD/default.aspx), covering courts of all levels from January 2013 to December 2023. |
| CNA | The [CNA Chinese news](https://www.cna.com.tw/) dataset contains 30 years of daily news articles from the Central News Agency, June 1993 to June 2023, covering domestic and foreign politics, society, finance and economics, culture and education, daily life, and more. |
| ETtoday News Cloud | [ETtoday News Cloud](https://www.ettoday.net/) data from October 2011 to December 2023. |
| Legislative Yuan Gazette | The [Legislative Yuan Gazette](https://ppg.ly.gov.tw/ppg/) from the 1st session of the 8th term to the 7th session of the 10th term. |
| Publisher book introductions | Book introductions from the [Suncolor](https://www.suncolor.com.tw/) and [Gotop](https://www.gotop.com.tw/) publisher websites. |
| GRB research project abstracts | [GRB](https://www.grb.gov.tw/) is an information system that collects government-funded research projects and their result reports. This dataset mainly covers abstracts of research projects and research reports from 1993 to 2023, in Chinese with English counterparts. |
| Academic conference paper abstracts | Papers from academic conferences held in Taiwan between 1988 and 2009, collected in the [Academic Conference Paper Abstract Database](https://sticnet.stpi.narl.org.tw/sticloc/ttscalle?meet:). |
| Taiwan Panorama | 30 years of articles from [Taiwan Panorama magazine](https://www.taiwan-panorama.com/), July 1993 to June 2023, focusing on Taiwan's culture, tourism, and local life. |
| NAER terms database | The [NAER terms database](https://terms.naer.edu.tw/) covers about 1.87 million academic terms across the humanities and sciences with their translated counterparts. |
| Government agency data | Partial data from agency websites, including the Executive Yuan's "[Overview of the Republic of China](https://www.ey.gov.tw/state/)", the Ministry of Culture's "[National Memory Bank](https://memory.culture.tw/)", the National Development Council's "[Archives Support Teaching Network](https://art.archives.gov.tw/index.aspx)", and the Ministry of Transportation and Communications' "[Traffic Safety Portal](https://168.motc.gov.tw/)". |
| Business Today | [Business Today](https://www.businesstoday.com.tw/) is a finance-focused weekly magazine; this dataset covers articles from January 2008 to July 2023. |
| MOE Mandarin and idiom dictionaries | Includes the following three resources:<br>[MOE Dictionary of Chinese Idioms](https://dict.idioms.moe.edu.tw/search.jsp?webMd=1&la=0), with 5,338 idioms, each with its definition, the original allusion text and its vernacular explanation, usage notes, example sentences, and more.<br>[MOE Revised Mandarin Chinese Dictionary](https://dict.revised.moe.edu.tw/?la=0&powerMode=0), covering Chinese characters and vocabulary of all kinds with pronunciation, radical, definition, and other information, about 165,539 entries in total.<br>[MOE Concise Mandarin Chinese Dictionary](https://dict.concised.moe.edu.tw/?la=0&powerMode=0), a condensed version of the revised dictionary, with 45,247 entries. |
| Scitech Vista | Science news and popular-science articles from the [Scitech Vista website](https://scitechvista.nat.gov.tw/). |
| iKnow | The [Science and Technology Industry Information Room (iKnow)](https://iknow.stpi.narl.org.tw/) provides technology market trends, strategic analyses, patent knowledge, and technology transaction information for Taiwan and the world, focusing on innovation and development in the technology industry; covers 2008 to 2023. |
| Science Development Monthly | [Science Development Monthly](https://ejournal.stpi.narl.org.tw/sd) is a popular-science periodical published by the National Science and Technology Council to promote science education, with articles from October 2004 to December 2020; from 2021 it was relaunched as the quarterly [Charming SciTech](https://www.charmingscitech.nat.gov.tw/), which provides articles on science topics of international interest. |
| Laws & Regulations Database | The [Laws & Regulations Database](https://law.moj.gov.tw/) contains the central laws and regulations, administrative rules, draft regulatory orders, and local self-government regulations most recently published by government agencies as of October 2023. |
| Local government tourism websites | Partial data from the tourism websites of some of Taiwan's county and city governments. |
| K-12 curriculum guidelines (12-year basic education) | The general guidelines of the 12-year basic education curriculum and the subject-specific guidelines for each school level. |
| CNA translated-names database | The CNA translated-names database collects the Chinese renderings of foreign surnames, personal names, organizations, and place names produced in the Central News Agency's news operations. |
| Children's books | 20 children's books, including The Adventures of Tom Sawyer, Peter Pan, Alice's Adventures in Wonderland, and Daddy-Long-Legs. |
| RedPajama-Data-V2 | English data extracted from the open multilingual corpus [RedPajama-Data-v2](https://github.com/togethercomputer/RedPajama-Data). |
| MathPile-commercial | The open mathematics corpus [MathPile-commercial](https://huggingface.co/datasets/GAIR/MathPile_Commercial). |
| Chinese Wikipedia | The content of all [Chinese Wikipedia](https://zh.wikipedia.org/zh-tw/%E4%B8%AD%E6%96%87%E7%BB%B4%E5%9F%BA%E7%99%BE%E7%A7%91) articles as of January 2023. |
| github-code-clean | An open-source code dataset from GitHub, with unlicensed code and documents removed. |

* Fine-tuning data
  * The TAIDE team trained Llama2-family models to generate the fine-tuning data, covering single-turn and multi-turn Q&A on tasks such as world knowledge, creative writing, common sense, translation, summarization, programming, and Taiwanese values, 128K examples in total. The fine-tuning data will be released publicly later.

# Model evaluation
* taide-bench
  * Evaluation data
    * Essay writing, letter writing, summarization, English-to-Chinese translation, and Chinese-to-English translation; 500 questions in total
    * Data link: [taide-bench](https://huggingface.co/datasets/taide/taide-bench)
  * Evaluation method
    * Scored by GPT-4
    * Scoring code: [taide-bench-eval](https://github.com/taide-taiwan/taide-bench-eval)
  * Scores

| Model | ZH→EN | EN→ZH | Summarization | Essay writing | Letter writing | Average |
| --- | ----- | ----- | ---- | ---- | ---- | --- |
| Llama3-TAIDE-LX-8B-Chat-Alpha1 | 7.770 | 8.280 | 8.495 | 9.605 | 8.950 | 8.620 |
| GPT3.5 | 8.880 | 8.810 | 7.450 | 9.490 | 8.750 | 8.676 |
| TAIDE-LX-7B-Chat | 7.165 | 7.685 | 7.720 | 9.635 | 9.110 | 8.263 |
| LLAMA2 7B | 6.075 | 4.475 | 5.905 | 2.625 | 3.040 | 4.424 |
| LLAMA2 13B | 6.480 | 6.135 | 6.110 | 2.565 | 3.000 | 4.858 |
| LLAMA2 70B | 6.975 | 6.375 | 6.795 | 2.625 | 2.990 | 5.152 |

# License terms
* [Llama3-TAIDE Model Community License Agreement](https://drive.google.com/file/d/12-Q0WWSjG0DW6CqJQm_jr5wUGRLeb-8p/view)

# Disclaimer
* Because of limitations in LLM architecture and unavoidable biases in the data, no response from the language model represents TAIDE's position. Additional safety guardrails are required before use, and responses may contain incorrect information, so users should not rely on them blindly.

# Development team
* [https://taide.tw/index/teamList](https://taide.tw/index/teamList)

# Related links
* [TAIDE official website](https://taide.tw/index)
* [TAIDE Huggingface](https://huggingface.co/taide)
* [TAIDE Github](https://github.com/taide-taiwan)
* [Kuwa AI](https://kuwaai.org/)

# Citation
* [TAIDE official website](https://taide.tw/index)
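Until the official sample code is released on GitHub, the templates above can be wired into a minimal end-to-end generation sketch like the one below. The repository id and generation settings are illustrative assumptions, not part of the original card.

```python
# Minimal generation sketch (the repo id is an assumption; adjust to the checkpoint you use).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "taide/Llama3-TAIDE-LX-8B-Chat-Alpha1"  # assumed id for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

chat = [
    {"role": "system", "content": "ไฝ ๆ˜ฏไธ€ๅ€‹ไพ†่‡ชๅฐ็ฃ็š„AIๅŠฉ็†๏ผŒไฝ ็š„ๅๅญ—ๆ˜ฏ TAIDE๏ผŒๆœƒ็”จ็น้ซ”ไธญๆ–‡ๅ›ž็ญ”ๅ•้กŒใ€‚"},
    {"role": "user", "content": "่ซ‹็ฐกไป‹ๅฐ็ฃ็š„ๅคœๅธ‚ๆ–‡ๅŒ–ใ€‚"},
]
# apply_chat_template returns token ids by default; add_generation_prompt cues the reply.
inputs = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```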
{"license": "other"}
chienweichang/Llama3-TAIDE-LX-8B-Chat-Alpha1-AWQ
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-05-02T02:47:53+00:00
text-generation
transformers
Quantizations of https://huggingface.co/HuggingFaceH4/zephyr-7b-beta # From original readme ## Model description - **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily English - **License:** MIT - **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) ...
{"language": ["en"], "license": "other", "tags": ["transformers", "gguf", "imatrix", "zephyr-7b-beta"], "pipeline_tag": "text-generation", "inference": false}
duyntnet/zephyr-7b-beta-imatrix-GGUF
null
[ "transformers", "gguf", "imatrix", "zephyr-7b-beta", "text-generation", "en", "license:other", "region:us" ]
null
2024-05-02T02:49:23+00:00
text-to-image
diffusers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿงจ diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "diffusers"}
rubbrband/sdxlHK_v097bCOSXL
null
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
null
2024-05-02T02:49:52+00:00
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GOLM3 This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1033 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 80 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8196 | 0.09 | 10 | 1.4618 | | 1.0741 | 0.18 | 20 | 0.5866 | | 0.3626 | 0.27 | 30 | 0.1793 | | 0.1587 | 0.36 | 40 | 0.1503 | | 0.1454 | 0.45 | 50 | 0.1476 | | 0.1461 | 0.54 | 60 | 0.1476 | | 0.143 | 0.63 | 70 | 0.1443 | | 0.1452 | 0.73 | 80 | 0.1424 | | 0.1345 | 0.82 | 90 | 0.1372 | | 0.1286 | 0.91 | 100 | 0.1308 | | 0.131 | 1.0 | 110 | 0.1317 | | 0.123 | 1.09 | 120 | 0.1265 | | 0.1161 | 1.18 | 130 | 0.1272 | | 0.1188 | 1.27 | 140 | 0.1170 | | 0.1197 | 1.36 | 150 | 0.1149 | | 0.1139 | 1.45 | 160 | 0.1146 | | 0.1122 | 1.54 | 170 | 0.1117 | | 0.1093 | 1.63 | 180 | 0.1098 | | 0.1093 | 1.72 | 190 | 0.1085 | | 0.1096 | 1.81 | 200 | 0.1068 | | 0.1086 | 1.9 | 210 | 0.1064 | | 0.107 | 1.99 | 220 | 0.1065 | | 0.0961 | 2.08 | 230 | 0.1069 | | 0.0946 | 2.18 | 240 | 0.1061 | | 0.0883 | 2.27 | 250 | 0.1048 | | 0.0933 | 2.36 | 260 | 0.1039 | | 0.0904 | 2.45 | 270 | 0.1058 | | 0.0862 | 2.54 | 280 | 0.1057 | | 0.0818 | 2.63 | 290 | 0.1047 | | 0.0852 | 2.72 | 300 | 0.1046 | | 0.0895 | 2.81 | 310 | 0.1048 | | 0.0914 | 2.9 | 320 | 0.1044 | | 0.0909 | 2.99 | 330 | 0.1033 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
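For reference, the hyperparameters listed above map onto a 🤗 `TrainingArguments` configuration roughly as follows; the output directory is a placeholder, and anything not stated in the card is left at its default.

```python
# Hypothetical TrainingArguments mirroring the listed hyperparameters.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GOLM3",                     # placeholder; not stated in the card
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,         # 8 x 16 = effective train batch of 128
    seed=42,
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=80,
    num_train_epochs=3,
    fp16=True,                              # "Native AMP" mixed precision
)
```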
{"license": "gemma", "tags": ["generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "GOLM3", "results": []}]}
Litzy619/GOLM3
null
[ "safetensors", "generated_from_trainer", "base_model:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-05-02T02:50:12+00:00
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GOLM2 This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1067 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 80 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7996 | 0.09 | 10 | 1.4594 | | 0.9949 | 0.18 | 20 | 0.5804 | | 0.3011 | 0.27 | 30 | 0.1728 | | 0.1527 | 0.36 | 40 | 0.1498 | | 0.1447 | 0.45 | 50 | 0.1491 | | 0.1451 | 0.54 | 60 | 0.1476 | | 0.142 | 0.63 | 70 | 0.1447 | | 0.1422 | 0.73 | 80 | 0.1415 | | 0.1304 | 0.82 | 90 | 0.1346 | | 0.1241 | 0.91 | 100 | 0.1294 | | 0.1263 | 1.0 | 110 | 0.1294 | | 0.1163 | 1.09 | 120 | 0.1235 | | 0.1091 | 1.18 | 130 | 0.1217 | | 0.1143 | 1.27 | 140 | 0.1201 | | 0.1131 | 1.36 | 150 | 0.1149 | | 0.1127 | 1.45 | 160 | 0.1158 | | 0.1087 | 1.54 | 170 | 0.1139 | | 0.1086 | 1.63 | 180 | 0.1125 | | 0.1069 | 1.72 | 190 | 0.1120 | | 0.1027 | 1.81 | 200 | 0.1119 | | 0.1037 | 1.9 | 210 | 0.1109 | | 0.1072 | 1.99 | 220 | 0.1116 | | 0.0896 | 2.08 | 230 | 0.1104 | | 0.0918 | 2.18 | 240 | 0.1096 | | 0.0828 | 2.27 | 250 | 0.1071 | | 0.0861 | 2.36 | 260 | 0.1080 | | 0.0853 | 2.45 | 270 | 0.1093 | | 0.0809 | 2.54 | 280 | 0.1084 | | 0.0782 | 2.63 | 290 | 0.1076 | | 0.0814 | 2.72 | 300 | 0.1072 | | 0.0849 | 2.81 | 310 | 0.1074 | | 0.0838 | 2.9 | 320 | 0.1070 | | 0.0864 | 2.99 | 330 | 0.1067 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
{"license": "gemma", "tags": ["generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "GOLM2", "results": []}]}
Litzy619/GOLM2
null
[ "safetensors", "generated_from_trainer", "base_model:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-05-02T02:50:12+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
ikno/rinko_300_labeling_lr_1e-5
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:50:14+00:00
null
null
{}
Xrunner/hive-p
null
[ "region:us" ]
null
2024-05-02T02:50:38+00:00
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GOLM1 This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1016 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 80 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7996 | 0.09 | 10 | 1.4084 | | 0.9949 | 0.18 | 20 | 0.5027 | | 0.3011 | 0.27 | 30 | 0.1578 | | 0.1527 | 0.36 | 40 | 0.1481 | | 0.1447 | 0.45 | 50 | 0.1469 | | 0.1451 | 0.54 | 60 | 0.1464 | | 0.142 | 0.63 | 70 | 0.1422 | | 0.1422 | 0.73 | 80 | 0.1372 | | 0.1304 | 0.82 | 90 | 0.1289 | | 0.1241 | 0.91 | 100 | 0.1269 | | 0.1263 | 1.0 | 110 | 0.1302 | | 0.1163 | 1.09 | 120 | 0.1185 | | 0.1091 | 1.18 | 130 | 0.1211 | | 0.1143 | 1.27 | 140 | 0.1143 | | 0.1131 | 1.36 | 150 | 0.1113 | | 0.1127 | 1.45 | 160 | 0.1115 | | 0.1087 | 1.54 | 170 | 0.1073 | | 0.1086 | 1.63 | 180 | 0.1064 | | 0.1069 | 1.72 | 190 | 0.1053 | | 0.1027 | 1.81 | 200 | 0.1047 | | 0.1037 | 1.9 | 210 | 0.1022 | | 0.1072 | 1.99 | 220 | 0.1029 | | 0.0896 | 2.08 | 230 | 0.1056 | | 0.0918 | 2.18 | 240 | 0.1024 | | 0.0828 | 2.27 | 250 | 0.1026 | | 0.0861 | 2.36 | 260 | 0.1022 | | 0.0853 | 2.45 | 270 | 0.1049 | | 0.0809 | 2.54 | 280 | 0.1028 | | 0.0782 | 2.63 | 290 | 0.1021 | | 0.0814 | 2.72 | 300 | 0.1021 | | 0.0849 | 2.81 | 310 | 0.1019 | | 0.0838 | 2.9 | 320 | 0.1016 | | 0.0864 | 2.99 | 330 | 0.1016 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
{"license": "gemma", "tags": ["generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "GOLM1", "results": []}]}
Litzy619/GOLM1
null
[ "safetensors", "generated_from_trainer", "base_model:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-05-02T02:51:09+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Chilean Spanish Large v3 This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Mezosky/es_clinical_assistance_10k dataset. It achieves the following results on the evaluation set: - Loss: 0.0961 - Wer: 6.9352 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 2000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2816 | 0.17 | 100 | 0.2250 | 11.2827 | | 0.1505 | 0.34 | 200 | 0.1479 | 9.8196 | | 0.1293 | 0.51 | 300 | 0.1350 | 72.1192 | | 0.1221 | 0.69 | 400 | 0.1292 | 9.6825 | | 0.141 | 0.86 | 500 | 0.1194 | 53.0899 | | 0.0922 | 1.03 | 600 | 0.1150 | 12.0380 | | 0.0773 | 1.2 | 700 | 0.1079 | 12.8661 | | 0.0745 | 1.37 | 800 | 0.1036 | 67.3017 | | 0.0699 | 1.54 | 900 | 0.1016 | 8.2697 | | 0.0917 | 1.72 | 1000 | 0.0956 | 8.6334 | | 0.0716 | 1.89 | 1100 | 0.0968 | 7.7997 | | 0.0441 | 2.06 | 1200 | 0.0946 | 8.3760 | | 0.0377 | 2.23 | 1300 | 0.0963 | 7.6178 | | 0.0417 | 2.4 | 1400 | 0.0951 | 7.5703 | | 0.0409 | 2.57 | 1500 | 0.0926 | 7.2681 | | 0.0356 | 2.74 | 1600 | 0.0912 | 6.8933 | | 0.0361 | 2.92 | 1700 | 0.0918 | 7.0835 | | 0.0215 | 3.09 | 1800 | 0.0938 | 6.9548 | | 0.018 | 3.26 | 1900 | 0.0960 | 6.6415 | | 0.0196 | 3.43 | 2000 | 0.0961 | 6.9352 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
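The card leaves intended usage open; a minimal inference sketch with the 🤗 `pipeline` API might look like this, where the audio file path is a placeholder.

```python
# Minimal ASR sketch for this checkpoint (the audio path is a placeholder).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Mezosky/whisper-large-v3-clinical-assistance",
)
result = asr("consulta_clinica.wav")  # placeholder file with Chilean Spanish speech
print(result["text"])
```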
{"language": ["es"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["Mezosky/es_clinical_assistance_10k"], "metrics": ["wer"], "base_model": "openai/whisper-large-v3", "model-index": [{"name": "Whisper Chilean Spanish Large v3", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Mezosky/es_clinical_assistance_10k", "type": "Mezosky/es_clinical_assistance_10k"}, "metrics": [{"type": "wer", "value": 6.935235697300322, "name": "Wer"}]}]}]}
Mezosky/whisper-large-v3-clinical-assistance
null
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "es", "dataset:Mezosky/es_clinical_assistance_10k", "base_model:openai/whisper-large-v3", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:51:32+00:00
null
null
{}
foureyednymph/portals-large-lora-pop-colab
null
[ "region:us" ]
null
2024-05-02T02:51:40+00:00
null
null
{}
msy78/cn_sdxl__100sets
null
[ "region:us" ]
null
2024-05-02T02:52:44+00:00
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Syed-Hasan-8503/Versatile-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
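As one hedged example of consuming these files, here is how the Q4_K_M quant from the table above could be loaded with `llama-cpp-python`; the prompt and sampling settings are illustrative.

```python
# Sketch: loading the Q4_K_M quant with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(model_path="Versatile-7B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```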
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "Syed-Hasan-8503/Versatile-7B", "quantized_by": "mradermacher"}
mradermacher/Versatile-7B-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Syed-Hasan-8503/Versatile-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:54:06+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper4 This model is a fine-tuned version of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) on the tiny dataset. It achieves the following results on the evaluation set: - Loss: 0.5409 - Wer: 28.2719 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 300 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 3.8231 | 0.2778 | 10 | 3.7088 | 76.9377 | | 3.1925 | 0.5556 | 20 | 2.9439 | 65.5654 | | 2.1383 | 0.8333 | 30 | 1.7221 | 61.5311 | | 1.0671 | 1.1111 | 40 | 0.8320 | 50.6989 | | 0.6947 | 1.3889 | 50 | 0.6587 | 41.0102 | | 0.6263 | 1.6667 | 60 | 0.5874 | 29.7967 | | 0.5827 | 1.9444 | 70 | 0.5402 | 27.3825 | | 0.4222 | 2.2222 | 80 | 0.5154 | 32.0521 | | 0.4065 | 2.5 | 90 | 0.4997 | 25.6989 | | 0.3959 | 2.7778 | 100 | 0.4804 | 23.8247 | | 0.3081 | 3.0556 | 110 | 0.4670 | 24.8412 | | 0.2497 | 3.3333 | 120 | 0.4687 | 23.2846 | | 0.2535 | 3.6111 | 130 | 0.4594 | 23.0940 | | 0.2428 | 3.8889 | 140 | 0.4545 | 23.5070 | | 0.1627 | 4.1667 | 150 | 0.4651 | 24.4917 | | 0.1224 | 4.4444 | 160 | 0.4686 | 23.6976 | | 0.1326 | 4.7222 | 170 | 0.4653 | 23.6976 | | 0.1334 | 5.0 | 180 | 0.4741 | 24.7459 | | 0.0659 | 5.2778 | 190 | 0.4792 | 24.6823 | | 0.0639 | 5.5556 | 200 | 0.4760 | 33.3863 | | 0.0667 | 5.8333 | 210 | 0.4820 | 25.4765 | | 0.042 | 6.1111 | 220 | 0.4933 | 29.4155 | | 0.0325 | 6.3889 | 230 | 0.5066 | 29.9873 | | 0.0333 | 6.6667 | 240 | 0.5126 | 26.0801 | | 0.0333 | 6.9444 | 250 | 0.5073 | 24.6188 | | 0.0187 | 7.2222 | 260 | 0.5129 | 27.3507 | | 0.0214 | 7.5 | 270 | 0.5209 | 28.2084 | | 0.0187 | 7.7778 | 280 | 0.5213 | 29.3202 | | 0.0312 | 8.0556 | 290 | 0.5274 | 34.6569 | | 0.0172 | 8.3333 | 300 | 0.5409 | 28.2719 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.1.dev0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "openai/whisper-tiny.en", "model-index": [{"name": "whisper4", "results": []}]}
khaingsmon/whisper4
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-tiny.en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:54:26+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
vkrishanan569/tinyllamaforlaw
null
[ "transformers", "pytorch", "gguf", "llama", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T02:55:12+00:00
null
transformers
# Uploaded model - **Developed by:** xkiwilabs - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
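Assuming this repository holds LoRA adapter weights (as the `lora_` name suggests), a sketch for attaching them to the 4-bit base model with PEFT might look like this.

```python
# Sketch: attaching the adapter to its base model (assumes the repo holds PEFT adapter weights).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# The 4-bit base requires the bitsandbytes package.
base = AutoModelForCausalLM.from_pretrained("unsloth/llama-3-8b-bnb-4bit", device_map="auto")
model = PeftModel.from_pretrained(base, "xkiwilabs/lora_opLLama3_modelv7")
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-bnb-4bit")
```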
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
xkiwilabs/lora_opLLama3_modelv7
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:55:36+00:00
text-generation
transformers
{}
vkrishanan569/llama2_for_law
null
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T02:55:57+00:00
null
null
{}
foureyednymph/portals-large-lora-melody-colab
null
[ "region:us" ]
null
2024-05-02T02:56:28+00:00
text-to-audio
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ceb_b128_le5_s4000 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3943 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:--------:|:----:|:---------------:| | 0.5272 | 39.2157 | 500 | 0.4584 | | 0.4634 | 78.4314 | 1000 | 0.4220 | | 0.4466 | 117.6471 | 1500 | 0.4104 | | 0.4289 | 156.8627 | 2000 | 0.4017 | | 0.4223 | 196.0784 | 2500 | 0.3969 | | 0.4149 | 235.2941 | 3000 | 0.3960 | | 0.4129 | 274.5098 | 3500 | 0.3962 | | 0.4108 | 313.7255 | 4000 | 0.3943 | ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
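For context, inference with a fine-tuned SpeechT5 checkpoint generally follows the pattern below. The checkpoint name suggests Cebuano, hence the sample text; the random speaker embedding is a placeholder, and real use would supply an x-vector computed from reference audio.

```python
# Sketch: synthesizing speech with the fine-tuned SpeechT5 checkpoint.
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("mikhail-panzo/ceb_b128_le5_s4000")
model = SpeechT5ForTextToSpeech.from_pretrained("mikhail-panzo/ceb_b128_le5_s4000")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Maayong buntag!", return_tensors="pt")
speaker_embedding = torch.randn(1, 512)  # placeholder; use a real x-vector in practice
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
```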
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "ceb_b128_le5_s4000", "results": []}]}
mikhail-panzo/ceb_b128_le5_s4000
null
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "base_model:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:56:37+00:00
text-generation
transformers
# Uploaded model - **Developed by:** theGhoul21 - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
theGhoul21/srl-sft-010524-16bit-v0.2
null
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:58:48+00:00
fill-mask
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
wuzhongyanqiu/dummy-model
null
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:59:23+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # miniCPM-2b-chatml This model is a fine-tuned version of [openbmb/MiniCPM-2B-128k](https://huggingface.co/openbmb/MiniCPM-2B-128k) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.38.2 - Pytorch 2.1.2+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
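The tags indicate a TRL supervised fine-tuning run; schematically, such a run is wired up along the following lines. The dataset file, text field, and sequence length are assumptions, the PEFT/LoRA wiring the card implies is omitted for brevity, and the argument placement matches TRL versions contemporary with the framework versions above (newer TRL moves these onto `SFTConfig`).

```python
# Schematic SFT setup with TRL (dataset, text field, and max length are illustrative assumptions).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model = AutoModelForCausalLM.from_pretrained("openbmb/MiniCPM-2B-128k", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("openbmb/MiniCPM-2B-128k", trust_remote_code=True)
dataset = load_dataset("json", data_files="chat_data.json", split="train")  # placeholder data

args = TrainingArguments(
    output_dir="miniCPM-2b-chatml",
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=1,
)
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=args,
    train_dataset=dataset,
    dataset_text_field="text",  # assumption
    max_seq_length=2048,        # assumption
)
trainer.train()
```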
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "openbmb/MiniCPM-2B-128k", "model-index": [{"name": "miniCPM-2b-chatml", "results": []}]}
DuongTrongChi/miniCPM-2b-chatml
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:openbmb/MiniCPM-2B-128k", "region:us" ]
null
2024-05-02T02:59:31+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
DuongTrongChi/minicpm-test
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:59:41+00:00
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Weyaxi/Bagel-Hermes-34B-Slerp <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q2_K.gguf) | Q2_K | 12.9 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.IQ3_XS.gguf) | IQ3_XS | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q3_K_S.gguf) | Q3_K_S | 15.1 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.IQ3_M.gguf) | IQ3_M | 15.7 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q3_K_L.gguf) | Q3_K_L | 18.2 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.IQ4_XS.gguf) | IQ4_XS | 18.7 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q5_K_S.gguf) | Q5_K_S | 23.8 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q5_K_M.gguf) | Q5_K_M | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q6_K.gguf) | Q6_K | 28.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "Weyaxi/Bagel-Hermes-34B-Slerp", "license_link": "https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE", "license_name": "yi-license", "quantized_by": "mradermacher"}
mradermacher/Bagel-Hermes-34B-Slerp-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Weyaxi/Bagel-Hermes-34B-Slerp", "license:other", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:02:01+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.0001_withdpo_4iters_bs256_5101lr_iter_4 This model is a fine-tuned version of [ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_3](https://huggingface.co/ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_3) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-08 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_3", "model-index": [{"name": "0.0001_withdpo_4iters_bs256_5101lr_iter_4", "results": []}]}
ShenaoZ/0.0001_withdpo_4iters_bs256_5101lr_iter_4
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_3", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T03:02:08+00:00
null
null
{}
Yicheng1215/HAN462
null
[ "region:us" ]
null
2024-05-02T03:03:15+00:00
null
null
{}
tegs001/retrieval_safetensors
null
[ "region:us" ]
null
2024-05-02T03:04:03+00:00
text-generation
transformers
I'm back and doing well! I've got a job in the field now, so we'll see in the long run how that affects my open source output. Here we have an 11B Llama 3 instruct model for future work. EDIT: Made a yaml mistake with part funnel, but it still works well. --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/633a809fa4a8f33508dce32c/jJxgpSwdSal2XWsJ0KlG8.png) This is a model stock merge of 3 models: - Part Wave - Part Block - Part Funnel With Part Funnel as the base. --- Part Wave: - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [0, 12] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [8, 18] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [13, 23] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [18, 32] --- Part Block: - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [0, 15] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [8, 23] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [16, 32] --- Part Funnel: - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [0, 15] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [14, 14] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [13, 13] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [12, 12] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [11, 11] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [10, 10] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [9, 9] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [8, 23] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [22, 22] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [21, 21] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [20, 20] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [19, 19] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [18, 18] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [17, 17] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [16, 32]
{"license": "llama3"}
athirdpath/Llama-3-11b-Instruct
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T03:04:58+00:00
text-classification
transformers
We have fine-tuned BERT base on the Brown corpus to predict whether a text was written in a non-fiction, straightforward style or a fictional, descriptive style. Here fiction and non-fiction refer only to the writing style and not to the factual correctness of what's written in the text. For more details, please see this blog and the referenced paper: https://bekushal.medium.com/fictometer-a-simple-and-explainable-algorithm-for-sentiment-analysis-31186d2a8c7e ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer # Load the model model_name = "bekushal/FictoBERT" model = AutoModelForSequenceClassification.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) # Example input text input_text = "It is a sunny day with a nice wind blowing and I am feeling very happy." # Preprocess the input inputs = tokenizer(input_text, return_tensors="pt", truncation=True, padding=True) # Perform inference outputs = model(**inputs) # Get predicted class probabilities predicted_probabilities = outputs.logits.softmax(dim=-1) # Get predicted class label predicted_label = predicted_probabilities.argmax().item() # Convert predicted label to human-readable format predicted_class = "fiction" if predicted_label == 1 else "non-fiction" # Display results print("Predicted class:", predicted_class) print("Predicted class probabilities [non-fiction, fiction]:", predicted_probabilities) ``` --- license: apache-2.0 ---
{}
bekushal/FictoBERT
null
[ "transformers", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:05:30+00:00
null
transformers
# Uploaded model - **Developed by:** jurieyel - **License:** apache-2.0 - **Finetuned from model :** unsloth/tinyllama-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/tinyllama-bnb-4bit"}
jurieyel/text2sql-tinyllama-4bit
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/tinyllama-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:05:49+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
jurieyel/text2sql-tinyllama-4bit-tokenizer
null
[ "transformers", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:05:59+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1406 - F1: 0.8628 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.26 | 1.0 | 525 | 0.1569 | 0.8259 | | 0.1256 | 2.0 | 1050 | 0.1514 | 0.8406 | | 0.0786 | 3.0 | 1575 | 0.1406 | 0.8628 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0 - Datasets 2.19.0 - Tokenizers 0.19.1
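A minimal inference sketch with the `transformers` pipeline API, assuming this checkpoint keeps PAN-X-style NER labels; the example sentence is only a placeholder:

```python
from transformers import pipeline

# Token-classification (NER) pipeline on the fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="gopidon/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```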
{"tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-de", "results": []}]}
gopidon/xlm-roberta-base-finetuned-panx-de
null
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:xlm-roberta-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:06:22+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
hi000000/insta_llama2-koen
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:06:25+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mi-super-modelo This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6450 - Accuracy: 0.125 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.6111 | 0.5 | 5 | 1.6301 | 0.175 | | 1.6035 | 1.0 | 10 | 1.6450 | 0.125 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
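A minimal usage sketch via the `transformers` pipeline, assuming the classification head was saved with the checkpoint; the input string and the label names it returns are placeholders:

```python
from transformers import pipeline

# Text-classification pipeline on the fine-tuned BERT checkpoint.
classifier = pipeline("text-classification", model="miguelpr/mi-super-modelo")

# Placeholder input; output is a list like [{"label": ..., "score": ...}].
print(classifier("This is an example sentence."))
```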
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "bert-base-cased", "model-index": [{"name": "mi-super-modelo", "results": []}]}
miguelpr/mi-super-modelo
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:06:40+00:00
text-generation
transformers
Experimental: Tiefighter Expanded to 15.6 B / 48 layers. # D_AU-Tiefighter-Expanded-13B-pass D_AU-Tiefighter-Expanded-13B-pass is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [KoboldAI/LLaMA2-13B-Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter) * [KoboldAI/LLaMA2-13B-Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter) ## ๐Ÿงฉ Configuration ```yaml slices: - sources: - model: KoboldAI/LLaMA2-13B-Tiefighter layer_range: [0, 40] - sources: - model: KoboldAI/LLaMA2-13B-Tiefighter layer_range: [32, 40] merge_method: passthrough dtype: bfloat16 ``` ## ๐Ÿ’ป Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "DavidAU/D_AU-Tiefighter-Expanded-13B-pass" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"tags": ["merge", "mergekit", "lazymergekit", "KoboldAI/LLaMA2-13B-Tiefighter"], "base_model": ["KoboldAI/LLaMA2-13B-Tiefighter", "KoboldAI/LLaMA2-13B-Tiefighter"]}
DavidAU/D_AU-Tiefighter-Expanded-13B-pass
null
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "KoboldAI/LLaMA2-13B-Tiefighter", "base_model:KoboldAI/LLaMA2-13B-Tiefighter", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T03:07:53+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # S1_InstructionGeneratorDelta This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0789 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 24 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 173 | 0.1487 | | No log | 2.0 | 346 | 0.1222 | | 0.3431 | 3.0 | 519 | 0.1085 | | 0.3431 | 4.0 | 692 | 0.0998 | | 0.3431 | 5.0 | 865 | 0.0949 | | 0.1133 | 6.0 | 1038 | 0.0911 | | 0.1133 | 7.0 | 1211 | 0.0889 | | 0.1133 | 8.0 | 1384 | 0.0873 | | 0.1007 | 9.0 | 1557 | 0.0855 | | 0.1007 | 10.0 | 1730 | 0.0842 | | 0.1007 | 11.0 | 1903 | 0.0831 | | 0.095 | 12.0 | 2076 | 0.0823 | | 0.095 | 13.0 | 2249 | 0.0817 | | 0.095 | 14.0 | 2422 | 0.0810 | | 0.0913 | 15.0 | 2595 | 0.0807 | | 0.0913 | 16.0 | 2768 | 0.0803 | | 0.0913 | 17.0 | 2941 | 0.0800 | | 0.0892 | 18.0 | 3114 | 0.0797 | | 0.0892 | 19.0 | 3287 | 0.0794 | | 0.0892 | 20.0 | 3460 | 0.0793 | | 0.0879 | 21.0 | 3633 | 0.0791 | | 0.0879 | 22.0 | 3806 | 0.0790 | | 0.0879 | 23.0 | 3979 | 0.0790 | | 0.0868 | 24.0 | 4152 | 0.0789 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
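A minimal generation sketch with the `transformers` pipeline; the prompt below is only an illustrative placeholder:

```python
from transformers import pipeline

# Text-generation pipeline on the fine-tuned distilgpt2 checkpoint.
generator = pipeline("text-generation", model="gjonesQ02/S1_InstructionGeneratorDelta")

out = generator("Step 1:", max_new_tokens=64, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```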
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilgpt2", "model-index": [{"name": "S1_InstructionGeneratorDelta", "results": []}]}
gjonesQ02/S1_InstructionGeneratorDelta
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilgpt2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T03:08:02+00:00
null
transformers
# Uploaded model - **Developed by:** theGhoul21 - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "gguf"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
theGhoul21/srl-sft-010524-Q8_0-v0.2
null
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:09:35+00:00
text-generation
allennlp
{"language": ["en", "ar"], "license": "apache-2.0", "library_name": "allennlp", "tags": ["finance", "legal"], "metrics": ["google_bleu"], "pipeline_tag": "text-generation"}
onlyonekinginxxxcommunity/ARTICWOLFGLOBAL
null
[ "allennlp", "finance", "legal", "text-generation", "en", "ar", "license:apache-2.0", "region:us" ]
null
2024-05-02T03:09:55+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisperFinetuneFinal This model is a fine-tuned version of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5711 - Wer: 22.2920 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 2.8037 | 0.2778 | 10 | 1.8972 | 34.0973 | | 0.8101 | 0.5556 | 20 | 0.6863 | 28.6970 | | 0.5778 | 0.8333 | 30 | 0.5491 | 22.8885 | | 0.4375 | 1.1111 | 40 | 0.4962 | 20.7535 | | 0.3437 | 1.3889 | 50 | 0.4767 | 20.2512 | | 0.3277 | 1.6667 | 60 | 0.4921 | 21.0047 | | 0.3431 | 1.9444 | 70 | 0.4972 | 20.4082 | | 0.1331 | 2.2222 | 80 | 0.5317 | 25.7143 | | 0.1385 | 2.5 | 90 | 0.5308 | 20.4396 | | 0.146 | 2.7778 | 100 | 0.5711 | 22.2920 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.1.dev0 - Tokenizers 0.19.1
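A minimal transcription sketch with the `transformers` ASR pipeline; the audio path is a placeholder, and English audio is assumed since the base model is `openai/whisper-tiny.en`:

```python
from transformers import pipeline

# Automatic-speech-recognition pipeline on the fine-tuned Whisper checkpoint.
asr = pipeline("automatic-speech-recognition", model="shljessie/whisperFinetuneFinal")

result = asr("sample.wav")  # placeholder path to a local audio file
print(result["text"])
```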
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "openai/whisper-tiny.en", "model-index": [{"name": "whisperFinetuneFinal", "results": []}]}
shljessie/whisperFinetuneFinal
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-tiny.en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:13:10+00:00
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-base-patch4-window8-256-finetuned-ind-17-imbalanced-aadhaarmask This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window8-256](https://huggingface.co/microsoft/swinv2-base-patch4-window8-256) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3463 - Accuracy: 0.8463 - Recall: 0.8463 - F1: 0.8464 - Precision: 0.8482 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:---------:| | No log | 0.9974 | 293 | 0.6222 | 0.7901 | 0.7901 | 0.7737 | 0.7747 | | No log | 1.9983 | 587 | 0.4901 | 0.8063 | 0.8063 | 0.7998 | 0.8066 | | No log | 2.9991 | 881 | 0.4374 | 0.8225 | 0.8225 | 0.8170 | 0.8356 | | No log | 4.0 | 1175 | 0.4262 | 0.8340 | 0.8340 | 0.8270 | 0.8541 | | No log | 4.9974 | 1468 | 0.4079 | 0.8310 | 0.8310 | 0.8290 | 0.8379 | | No log | 5.9983 | 1762 | 0.4117 | 0.8370 | 0.8370 | 0.8361 | 0.8509 | | No log | 6.9991 | 2056 | 0.3807 | 0.8370 | 0.8370 | 0.8361 | 0.8416 | | No log | 8.0 | 2350 | 0.3419 | 0.8595 | 0.8595 | 0.8583 | 0.8609 | | No log | 8.9974 | 2643 | 0.3628 | 0.8438 | 0.8438 | 0.8424 | 0.8448 | | 0.4492 | 9.9745 | 2930 | 0.3638 | 0.8399 | 0.8399 | 0.8394 | 0.8410 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.0a0+81ea7a4 - Datasets 2.19.0 - Tokenizers 0.19.1
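A minimal inference sketch with the `transformers` image-classification pipeline; the image path is a placeholder, and the label set is whatever the `imagefolder` dataset defined during fine-tuning:

```python
from transformers import pipeline

# Image-classification pipeline on the fine-tuned SwinV2 checkpoint.
classifier = pipeline(
    "image-classification",
    model="Kushagra07/swinv2-base-patch4-window8-256-finetuned-ind-17-imbalanced-aadhaarmask",
)

print(classifier("document.jpg"))  # placeholder path; returns top labels with scores
```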
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy", "recall", "f1", "precision"], "base_model": "microsoft/swinv2-base-patch4-window8-256", "model-index": [{"name": "swinv2-base-patch4-window8-256-finetuned-ind-17-imbalanced-aadhaarmask", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.8463175819497658, "name": "Accuracy"}, {"type": "recall", "value": 0.8463175819497658, "name": "Recall"}, {"type": "f1", "value": 0.8463640211224454, "name": "F1"}, {"type": "precision", "value": 0.8481964005333177, "name": "Precision"}]}]}]}
Kushagra07/swinv2-base-patch4-window8-256-finetuned-ind-17-imbalanced-aadhaarmask
null
[ "transformers", "tensorboard", "safetensors", "swinv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swinv2-base-patch4-window8-256", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:14:33+00:00
automatic-speech-recognition
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
JunWorks/Quantized_4bit_WhisperSmallOri_FP16
null
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-05-02T03:18:14+00:00
null
diffusers
{}
CHU1226/sd-class-butterflies-32
null
[ "diffusers", "safetensors", "diffusers:DDPMPipeline", "region:us" ]
null
2024-05-02T03:18:46+00:00
null
transformers
# Uploaded model - **Developed by:** theGhoul21 - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "gguf"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
theGhoul21/srl-sft-010524-gguf-16bit-v0.2
null
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:18:54+00:00
text2text-generation
transformers
{"license": "mit"}
hemanthlepcha0/grammar-correction
null
[ "transformers", "pytorch", "m2m_100", "text2text-generation", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:20:04+00:00
null
transformers
# Llama-3-KoEn-8B-xtuner-llava-preview 🌋 <!-- Provide a quick summary of what the model is/does. --> Llama-3-KoEn-8B-xtuner-llava-preview 🌋 is a Korean multimodal model based on the LLaVA architecture, merged using the [ChatVector](https://arxiv.org/abs/2310.04799) method and leveraging 2 models: 1) [beomi/Llama-3-KoEn-8B-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview) 2) [xtuner/llava-llama-3-8b-transformers](https://huggingface.co/xtuner/llava-llama-3-8b-transformers) ## Model Details ### Model Description - **Developed by:** Junbum Lee (Beomi) - **Model type:** HuggingFace Llava 🌋 - **Language(s) (NLP):** Korean, English - **License:** cc-by-nc-sa-4.0 under Llama3 License - **Merged from model:** [beomi/Llama-3-KoEn-8B-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview) & [xtuner/llava-llama-3-8b-transformers](https://huggingface.co/xtuner/llava-llama-3-8b-transformers) ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> ![Cat walking on frozen Han-River, Seoul](https://cdn-uploads.huggingface.co/production/uploads/5e56829137cb5b49818287ea/NWfoArWI4UPAxpEnolkwT.jpeg) > Two versions recommended > > v1. `revision='a38aac3'`: Basic ChatVector, with [25B+ trained KoEn ckpt(rev. d4d25a2)](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview/commit/d4d25a2). > > v1-1. `revision='0224971'`: Basic ChatVector, with [40B+ trained KoEn ckpt(rev. ad39b32)](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview/commit/ad39b32cd4207f37f61f16e79d3f4020c5b744ef). > > v2. `revision='4f04d1e'`: Model diff based merging(ref. https://huggingface.co/blog/maywell/llm-feature-transfer), with [25B+ trained KoEn ckpt(rev. d4d25a2)](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview/commit/d4d25a2). ```python import requests from PIL import Image import torch from transformers import AutoProcessor, LlavaForConditionalGeneration model_id = "beomi/Llama-3-KoEn-8B-xtuner-llava-preview" model = LlavaForConditionalGeneration.from_pretrained( model_id, torch_dtype='auto', device_map='auto', revision='a38aac3', # 'a38aac3' for basic ChatVector, '4f04d1e' for Model diff based merging(ref. https://huggingface.co/blog/maywell/llm-feature-transfer) ) processor = AutoProcessor.from_pretrained(model_id) from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('./llava-llama-3-KoEn-8b-v1_1-transformers') terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] prompt = ("<|start_header_id|>user<|end_header_id|>\n\n<image>\n이 이미지에 대해서 설명해주세요.<|eot_id|>" "<|start_header_id|>assistant<|end_header_id|>\n\n이 이미지에는") image_file = "https://cdn-uploads.huggingface.co/production/uploads/5e56829137cb5b49818287ea/NWfoArWI4UPAxpEnolkwT.jpeg" raw_image = Image.open(requests.get(image_file, stream=True).raw) inputs = processor(prompt, raw_image, return_tensors='pt').to(0, torch.float16) output = model.generate(**inputs, max_new_tokens=400, do_sample=True, eos_token_id=terminators,) print(processor.decode(output[0][2:], skip_special_tokens=False)) # --- Example Output [v1, Chat Vector] --- user<|end_header_id|> <image> 이 이미지에 대해서 설명해주세요.<|eot_id|><|start_header_id|>assistant<|end_header_id|> 이 이미지에는 고양이 한 마리가 강물 위를 걸어가는 모습이 보여집니다. 고양이는 강물의 잔물결에 미끄럼을 타고 강 가로를 지나는 데 능숙하게 보입니다. 
고양이의 발은 강물로 잘 들어가, 그것을 즐기며 걸어갑니다. 또한 이 이미지도 음성 녹음을 하거나 녹화된 자료로 제작되었으며, 주로 고양이의 모습을 강하게 보여줍니다. 소리 효과도 여러 가지로 추가하여 고양이의 스토리를 다양하게 전달합니다. 강물은 잔물결을 나타내며 강물 위를 걷는 고양이의 모습을 더욱 강렬하게 강조하기 위해 잔물결을 통해 더 디테일한 장면을 보여줍니다.<|eot_id|> # --- Example Output [v1-1, Chat Vector] --- user<|end_header_id|> <image> 이 이미지에 대해서 설명해주세요.<|eot_id|><|start_header_id|>assistant<|end_header_id|> 이 이미지에서는 한 고양이가 서해안에 위치한 바다를 걷고 있는 모습을 볼 수 있습니다. 고양이는 해변에서부터 바다로 걸어들어가는 중이며, 주변에는 잔잔한 파도가 밀려오는 모습을 보여주고 있습니다. 이 고양이는 태어날 때부터 고양이와 강아지와는 다르게 바다를 경험하고, 적응해가고 있습니다. 고양이는 바다를 좋아하고, 이 환경에서 행복을 느끼는 것 같습니다. 이 고양이는 인간이 아닌 자연의 일부로써 이 환경에서 살아가고 있습니다.<|eot_id|> # --- Example Output [v2, Model diff based merging] --- user<|end_header_id|> <image> 이 이미지에 대해서 설명해주세요.<|eot_id|><|start_header_id|>assistant<|end_header_id|> 이 이미지에는 한국어 자막과 함께 고양이가 물에 발을 디디고 걷는 모습이 담겨 있습니다. 고양이는 오른쪽 발을 물에 담그고 걷는 중이며, 한국어 자막은 "고양이는 물을 좋아합니다"라는 문장을 포함하고 있습니다. 이 자막은 고양이가 물을 좋아하는 것을 강조하고 있습니다.<|eot_id|> ```
{"language": ["ko", "en"], "license": "cc-by-nc-sa-4.0", "library_name": "transformers"}
beomi/Llama-3-KoEn-8B-xtuner-llava-preview
null
[ "transformers", "safetensors", "llava", "pretraining", "ko", "en", "arxiv:2310.04799", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:21:54+00:00
text-generation
transformers
# Llama-3-OpenBioMed-8B-slerp-v0.2 Llama-3-OpenBioMed-8B-slerp-v0.2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B) * [johnsnowlabs/JSL-MedLlama-3-8B-v2.0](https://huggingface.co/johnsnowlabs/JSL-MedLlama-3-8B-v2.0) ## ๐Ÿงฉ Configuration ```yaml slices: - sources: - model: aaditya/Llama3-OpenBioLLM-8B layer_range: [0, 32] - model: johnsnowlabs/JSL-MedLlama-3-8B-v2.0 layer_range: [0, 32] merge_method: slerp base_model: johnsnowlabs/JSL-MedLlama-3-8B-v2.0 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## ๐Ÿ’ป Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "abhinand/Llama-3-OpenBioMed-8B-slerp-v0.2" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"tags": ["merge", "mergekit", "lazymergekit", "aaditya/Llama3-OpenBioLLM-8B", "johnsnowlabs/JSL-MedLlama-3-8B-v2.0"], "base_model": ["aaditya/Llama3-OpenBioLLM-8B", "johnsnowlabs/JSL-MedLlama-3-8B-v2.0"]}
abhinand/Llama-3-OpenBioMed-8B-slerp-v0.2
null
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "aaditya/Llama3-OpenBioLLM-8B", "johnsnowlabs/JSL-MedLlama-3-8B-v2.0", "base_model:aaditya/Llama3-OpenBioLLM-8B", "base_model:johnsnowlabs/JSL-MedLlama-3-8B-v2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T03:22:03+00:00
null
null
{}
syannn/spammer
null
[ "region:us" ]
null
2024-05-02T03:22:53+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
TinyPixel/llama-3-adapter3
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:23:03+00:00
text-generation
null
## Llamacpp imatrix Quantizations of Hermes-2-Pro-Llama-3-8B Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2777">b2777</a> for quantization. Original model: https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) ## Prompt format ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Hermes-2-Pro-Llama-3-8B-Q8_0.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. | | [Hermes-2-Pro-Llama-3-8B-Q6_K.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. | | [Hermes-2-Pro-Llama-3-8B-Q5_K_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. | | [Hermes-2-Pro-Llama-3-8B-Q5_K_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. | | [Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [Hermes-2-Pro-Llama-3-8B-Q4_K_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. | | [Hermes-2-Pro-Llama-3-8B-IQ4_NL.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [Hermes-2-Pro-Llama-3-8B-IQ4_XS.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Hermes-2-Pro-Llama-3-8B-Q3_K_L.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. | | [Hermes-2-Pro-Llama-3-8B-Q3_K_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. | | [Hermes-2-Pro-Llama-3-8B-IQ3_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. 
| | [Hermes-2-Pro-Llama-3-8B-IQ3_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | [Hermes-2-Pro-Llama-3-8B-Q3_K_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. | | [Hermes-2-Pro-Llama-3-8B-IQ3_XS.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Hermes-2-Pro-Llama-3-8B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [Hermes-2-Pro-Llama-3-8B-Q2_K.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. | | [Hermes-2-Pro-Llama-3-8B-IQ2_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [Hermes-2-Pro-Llama-3-8B-IQ2_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. | | [Hermes-2-Pro-Llama-3-8B-IQ2_XS.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. | | [Hermes-2-Pro-Llama-3-8B-IQ2_XXS.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. | | [Hermes-2-Pro-Llama-3-8B-IQ1_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. | | [Hermes-2-Pro-Llama-3-8B-IQ1_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. | ## Which file should I choose? A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. 
If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)

But basically, if you're aiming for below Q4 and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.

These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs. quality is a tradeoff you'll have to decide.

The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. If you want a quick smoke test once you've downloaded a file, see the example invocation sketched at the end of this card.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
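As the smoke test mentioned above, here is a hedged sketch of running a downloaded quant with the `main` binary built from the llama.cpp release linked at the top, using the ChatML prompt format shown earlier. The file name, context size, and `-ngl 99` (offload all layers to the GPU) are assumptions — adjust them to the quant you downloaded and the VRAM you have:

```bash
# -e expands the \n escapes in the prompt string; -n caps the number of generated tokens
./main -m Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf -ngl 99 -c 4096 -n 256 -e \
  -p "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nHello, who are you?<|im_end|>\n<|im_start|>assistant\n"
```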
{"language": ["en"], "license": "apache-2.0", "tags": ["Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl"], "datasets": ["teknium/OpenHermes-2.5"], "base_model": "NousResearch/Meta-Llama-3-8B", "widget": [{"example_title": "Hermes 2 Pro", "messages": [{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."}, {"role": "user", "content": "Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world."}]}], "quantized_by": "bartowski", "pipeline_tag": "text-generation", "model-index": [{"name": "Hermes-2-Pro-Llama-3-8B", "results": []}]}
bartowski/Hermes-2-Pro-Llama-3-8B-GGUF
null
[ "gguf", "Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "text-generation", "en", "dataset:teknium/OpenHermes-2.5", "base_model:NousResearch/Meta-Llama-3-8B", "license:apache-2.0", "region:us" ]
null
2024-05-02T03:23:42+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
ryanyeo/kirnect-Llama-3-Open-Ko-8B-Instruct-preview-0501-5150step-8batch_5epoch
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:24:02+00:00
null
null
{}
ppxscal/cv_project
null
[ "region:us" ]
null
2024-05-02T03:27:32+00:00