| Column | Type | Range / cardinality |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-02 12:29:30 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 548 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-02 12:29:18 |
| card | string | length 11 to 1.01M |
DeveloperZoyel/model
DeveloperZoyel
2024-09-09T13:38:17Z
11
0
transformers
[ "transformers", "pytorch", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "sft", "en", "license:apache-2.0", "endpoints_compatible", "8-bit", "region:us" ]
null
2024-09-09T12:47:57Z
--- base_model: unsloth/meta-llama-3.1-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- # Uploaded model - **Developed by:** developerZoyal - **License:** apache-2.0 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
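A minimal usage sketch (not part of the original card), assuming the repository contains merged weights loadable through the standard `transformers` API; the prompt and generation settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned checkpoint from the Hub (assumes merged, non-adapter weights).
tokenizer = AutoTokenizer.from_pretrained("DeveloperZoyel/model")
model = AutoModelForCausalLM.from_pretrained("DeveloperZoyel/model", device_map="auto")

inputs = tokenizer("Explain what fine-tuning is in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```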
benayas/distilbert-base-uncased-tweet_eval-v1
benayas
2024-09-09T13:37:08Z
64
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-09T13:36:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
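The getting-started section above is left as a placeholder; here is a minimal sketch using the standard `transformers` pipeline, noting that the label set from the tweet_eval fine-tuning is not documented in this card:

```python
from transformers import pipeline

# Text-classification pipeline; label names depend on the (unstated) tweet_eval subset.
classifier = pipeline("text-classification", model="benayas/distilbert-base-uncased-tweet_eval-v1")
print(classifier("I love this new phone, the camera is amazing!"))
```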
ffxvs/lora-effects
ffxvs
2024-09-09T13:30:13Z
0
2
null
[ "region:us" ]
null
2023-04-06T16:39:21Z
### This repo contains: * [Add Detail (Detail Tweaker)](https://civitai.com/models/58390) * [Add More Details (Detail Enhancer/Tweaker)](https://civitai.com/models/82098) * [Epi Noise Offset](https://civitai.com/models/13941/epinoiseoffset) * [LCM Lora SD v1.5](https://huggingface.co/latent-consistency/lcm-lora-sdv1-5) * [LowRA](https://civitai.com/models/48139/lowra)
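For reference, LoRA files like these are applied on top of a Stable Diffusion v1.x pipeline. A minimal sketch with `diffusers`, assuming an SD v1.5 base checkpoint (the base repo id is an assumption) and using the LCM LoRA from its upstream repo linked above, which also needs the LCM scheduler and a low guidance scale:

```python
import torch
from diffusers import StableDiffusionPipeline, LCMScheduler

# Base SD v1.5 pipeline; any SD v1.x checkpoint should work here.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# LCM LoRA enables few-step sampling when paired with the LCM scheduler.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

image = pipe(
    "a cozy cabin in the woods, highly detailed",
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
image.save("out.png")
```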
yemen2016/danskbert_ED
yemen2016
2024-09-09T13:29:15Z
71
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:vesteinn/DanskBERT", "base_model:finetune:vesteinn/DanskBERT", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-09T13:14:07Z
--- library_name: transformers license: cc-by-4.0 base_model: vesteinn/DanskBERT tags: - generated_from_trainer model-index: - name: danskbert_ED results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # danskbert_ED This model is a fine-tuned version of [vesteinn/DanskBERT](https://huggingface.co/vesteinn/DanskBERT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7439 - F1-score: 0.8339 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1-score | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 69 | 0.6389 | 0.8014 | | No log | 2.0 | 138 | 0.6085 | 0.7654 | | No log | 3.0 | 207 | 0.7439 | 0.8339 | | No log | 4.0 | 276 | 0.8447 | 0.8273 | | No log | 5.0 | 345 | 0.9992 | 0.8193 | | No log | 6.0 | 414 | 1.4570 | 0.7775 | | No log | 7.0 | 483 | 1.4951 | 0.8029 | | 0.2302 | 8.0 | 552 | 1.7546 | 0.7769 | | 0.2302 | 9.0 | 621 | 1.5325 | 0.8115 | | 0.2302 | 10.0 | 690 | 1.6252 | 0.8033 | | 0.2302 | 11.0 | 759 | 1.5428 | 0.8197 | | 0.2302 | 12.0 | 828 | 1.5487 | 0.8278 | | 0.2302 | 13.0 | 897 | 1.5563 | 0.8195 | | 0.2302 | 14.0 | 966 | 1.5723 | 0.8195 | | 0.0017 | 15.0 | 1035 | 1.5878 | 0.8276 | | 0.0017 | 16.0 | 1104 | 1.6001 | 0.8276 | | 0.0017 | 17.0 | 1173 | 1.6105 | 0.8276 | | 0.0017 | 18.0 | 1242 | 1.6195 | 0.8276 | | 0.0017 | 19.0 | 1311 | 1.6236 | 0.8276 | | 0.0017 | 20.0 | 1380 | 1.6166 | 0.8278 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
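A minimal inference sketch for the checkpoint above; the label names come from whatever id2label mapping was saved with the model, which the card does not document:

```python
from transformers import pipeline

# Danish text classification with the fine-tuned DanskBERT checkpoint.
clf = pipeline("text-classification", model="yemen2016/danskbert_ED")
print(clf("Jeg er meget glad for denne bog."))
```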
philipp-zettl/MiniLM-similarity-small
philipp-zettl
2024-09-09T13:13:56Z
17
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1267", "loss:CoSENTLoss", "arxiv:1908.10084", "base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-09-07T16:02:14Z
--- language: [] library_name: sentence-transformers tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:1267 - loss:CoSENTLoss base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 datasets: [] metrics: - pearson_cosine - spearman_cosine - pearson_manhattan - spearman_manhattan - pearson_euclidean - spearman_euclidean - pearson_dot - spearman_dot - pearson_max - spearman_max widget: - source_sentence: Give me suggestions for a high-quality DSLR camera sentences: - faq query - subscription query - faq query - source_sentence: Aidez-moi à configurer une nouvelle adresse e-mail sentences: - order query - faq query - feedback query - source_sentence: Как я могу изменить адрес доставки? sentences: - support query - product query - product query - source_sentence: ساعدني في حذف الملفات الغير مرغوب فيها من هاتفي sentences: - technical support query - product recommendation - faq query - source_sentence: Envoyez-moi la politique de garantie de ce produit sentences: - faq query - account query - faq query pipeline_tag: sentence-similarity model-index: - name: SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: MiniLM dev type: MiniLM-dev metrics: - type: pearson_cosine value: 0.6538226572138826 name: Pearson Cosine - type: spearman_cosine value: 0.6336766646599241 name: Spearman Cosine - type: pearson_manhattan value: 0.5799895241429639 name: Pearson Manhattan - type: spearman_manhattan value: 0.5525776786782183 name: Spearman Manhattan - type: pearson_euclidean value: 0.5732001104236694 name: Pearson Euclidean - type: spearman_euclidean value: 0.5394971970682657 name: Spearman Euclidean - type: pearson_dot value: 0.6359725423136287 name: Pearson Dot - type: spearman_dot value: 0.6237936341101822 name: Spearman Dot - type: pearson_max value: 0.6538226572138826 name: Pearson Max - type: spearman_max value: 0.6336766646599241 name: Spearman Max - task: type: semantic-similarity name: Semantic Similarity dataset: name: MiniLM test type: MiniLM-test metrics: - type: pearson_cosine value: 0.6682368113711722 name: Pearson Cosine - type: spearman_cosine value: 0.6222011918428743 name: Spearman Cosine - type: pearson_manhattan value: 0.5714617063306076 name: Pearson Manhattan - type: spearman_manhattan value: 0.5481366191719228 name: Spearman Manhattan - type: pearson_euclidean value: 0.5726946277850402 name: Pearson Euclidean - type: spearman_euclidean value: 0.549312247309557 name: Spearman Euclidean - type: pearson_dot value: 0.6396412507506479 name: Pearson Dot - type: spearman_dot value: 0.6107388175009413 name: Spearman Dot - type: pearson_max value: 0.6682368113711722 name: Pearson Max - type: spearman_max value: 0.6222011918428743 name: Spearman Max --- # SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision bf3bf13ab40c3157080a7ab344c831b9ad18b5eb --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("philipp-zettl/MiniLM-similarity-small") # Run inference sentences = [ 'Envoyez-moi la politique de garantie de ce produit', 'faq query', 'account query', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `MiniLM-dev` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.6538 | | **spearman_cosine** | **0.6337** | | pearson_manhattan | 0.58 | | spearman_manhattan | 0.5526 | | pearson_euclidean | 0.5732 | | spearman_euclidean | 0.5395 | | pearson_dot | 0.636 | | spearman_dot | 0.6238 | | pearson_max | 0.6538 | | spearman_max | 0.6337 | #### Semantic Similarity * Dataset: `MiniLM-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.6682 | | **spearman_cosine** | **0.6222** | | pearson_manhattan | 0.5715 | | spearman_manhattan | 0.5481 | | pearson_euclidean | 0.5727 | | spearman_euclidean | 0.5493 | | pearson_dot | 0.6396 | | spearman_dot | 0.6107 | | pearson_max | 0.6682 | | spearman_max | 0.6222 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 1,267 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 6 tokens</li><li>mean: 10.77 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 5.31 tokens</li><li>max: 6 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.67</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:--------------------------------------------------------------|:---------------------------|:-----------------| | <code>Get information on the next art exhibition</code> | <code>product query</code> | <code>0.0</code> | | <code>Show me how to update my profile</code> | <code>product query</code> | <code>0.0</code> | | <code>Покажите мне доступные варианты полетов в Турцию</code> | <code>faq query</code> | <code>0.0</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Evaluation Dataset #### Unnamed Dataset * Size: 159 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | 
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 6 tokens</li><li>mean: 10.65 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 5.35 tokens</li><li>max: 6 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.67</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:---------------------------------------------------------------|:---------------------------|:-----------------| | <code>Sende mir die Bestellbestätigung per E-Mail</code> | <code>order query</code> | <code>0.0</code> | | <code>How do I add a new payment method?</code> | <code>faq query</code> | <code>1.0</code> | | <code>No puedo conectar mi impresora, ¿puedes ayudarme?</code> | <code>support query</code> | <code>1.0</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `learning_rate`: 2e-05 - `num_train_epochs`: 2 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 8 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 2 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - 
`ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | MiniLM-dev_spearman_cosine | MiniLM-test_spearman_cosine | |:------:|:----:|:-------------:|:------:|:--------------------------:|:---------------------------:| | 0.0629 | 10 | 6.2479 | 2.5890 | 0.1448 | - | | 0.1258 | 20 | 4.3549 | 2.2787 | 0.1965 | - | | 0.1887 | 30 | 3.5969 | 2.0104 | 0.2599 | - | | 0.2516 | 40 | 2.4979 | 1.7269 | 0.3357 | - | | 0.3145 | 50 | 2.5551 | 1.5747 | 0.4439 | - | | 0.3774 | 60 | 3.1446 | 1.4892 | 0.4750 | - | | 0.4403 | 70 | 2.1353 | 1.5305 | 0.4662 | - | | 0.5031 | 80 | 2.9341 | 1.3718 | 0.4848 | - | | 0.5660 | 90 | 2.8709 | 1.2469 | 0.5316 | - | | 0.6289 | 100 | 2.1367 | 1.2558 | 0.5436 | - | | 0.6918 | 110 | 2.2735 | 1.2939 | 0.5392 | - | | 0.7547 | 120 | 2.8646 | 1.1206 | 0.5616 | - | | 0.8176 | 130 | 3.3204 | 1.0213 | 0.5662 | - | | 0.8805 | 140 | 0.8989 | 0.9866 | 0.5738 | - | | 0.9434 | 150 | 0.0057 | 0.9961 | 0.5674 | - | | 1.0063 | 160 | 0.0019 | 1.0111 | 0.5674 | - | | 1.0692 | 170 | 0.4617 | 1.0275 | 0.5747 | - | | 1.1321 | 180 | 0.0083 | 1.0746 | 0.5732 | - | | 1.1950 | 190 | 0.5048 | 1.0968 | 0.5753 | - | | 1.2579 | 200 | 0.0002 | 1.0840 | 0.5738 | - | | 1.3208 | 210 | 0.07 | 1.0364 | 0.5753 | - | | 1.3836 | 220 | 0.0 | 0.9952 | 0.5750 | - | | 1.4465 | 230 | 0.0 | 0.9922 | 0.5744 | - | | 1.5094 | 240 | 0.0 | 0.9923 | 0.5726 | - | | 1.0126 | 250 | 0.229 | 0.9930 | 0.5729 | - | | 1.0755 | 260 | 2.2061 | 0.9435 | 0.5880 | - | | 1.1384 | 270 | 2.7711 | 0.8892 | 0.6078 | - | | 1.2013 | 280 | 0.7528 | 0.8886 | 0.6148 | - | | 1.2642 | 290 | 0.386 | 0.8927 | 0.6162 | - | | 1.3270 | 300 | 0.8902 | 0.8710 | 0.6267 | - | | 1.3899 | 310 | 0.9534 | 0.8429 | 0.6337 | - | | 1.4403 | 318 | - | - | - | 0.6222 | ### Framework Versions - Python: 3.10.14 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.3.1+cu121 - Accelerate: 0.33.0 - Datasets: 2.21.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CoSENTLoss 
```bibtex @online{kexuefm-8847, title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT}, author={Su Jianlin}, year={2022}, month={Jan}, url={https://kexue.fm/archives/8847}, } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf
RichardErkhov
2024-09-09T13:01:58Z
28
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-09T07:37:25Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Cucumber-7b-10k - GGUF - Model creator: https://huggingface.co/Ba2han/ - Original model: https://huggingface.co/Ba2han/Cucumber-7b-10k/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Cucumber-7b-10k.Q2_K.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.Q2_K.gguf) | Q2_K | 2.53GB | | [Cucumber-7b-10k.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [Cucumber-7b-10k.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.IQ3_S.gguf) | IQ3_S | 2.96GB | | [Cucumber-7b-10k.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [Cucumber-7b-10k.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.IQ3_M.gguf) | IQ3_M | 3.06GB | | [Cucumber-7b-10k.Q3_K.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.Q3_K.gguf) | Q3_K | 3.28GB | | [Cucumber-7b-10k.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [Cucumber-7b-10k.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [Cucumber-7b-10k.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [Cucumber-7b-10k.Q4_0.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.Q4_0.gguf) | Q4_0 | 3.83GB | | [Cucumber-7b-10k.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [Cucumber-7b-10k.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [Cucumber-7b-10k.Q4_K.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.Q4_K.gguf) | Q4_K | 4.07GB | | [Cucumber-7b-10k.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [Cucumber-7b-10k.Q4_1.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.Q4_1.gguf) | Q4_1 | 4.24GB | | [Cucumber-7b-10k.Q5_0.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.Q5_0.gguf) | Q5_0 | 4.65GB | | [Cucumber-7b-10k.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [Cucumber-7b-10k.Q5_K.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.Q5_K.gguf) | Q5_K | 4.78GB | | [Cucumber-7b-10k.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [Cucumber-7b-10k.Q5_1.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.Q5_1.gguf) | Q5_1 | 5.07GB | | 
[Cucumber-7b-10k.Q6_K.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.Q6_K.gguf) | Q6_K | 5.53GB | | [Cucumber-7b-10k.Q8_0.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf/blob/main/Cucumber-7b-10k.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- license: cc-by-4.0 datasets: - HuggingFaceH4/no_robots - Ba2han/Reddit-instruct-curated_rated-1.2k - Ba2han/databricks-dolly_rated - Open-Orca/OpenOrca --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6324eabf05bd8a54c6eb1650/xRIRb-57y8tyROdrF4aeI.png) The training dataset consists of the 2k longest examples from no_robots, reddit_instruct, dolly, and OpenOrca, plus two other personal datasets. Please use ChatML with the default system message, or enter your own. The model was trained with various system messages, the one in the config being the default. The model is: - Very good at generating long and coherent text. - Creative, due to data from Reddit ELI5 and a few other sources. - Better at handling longer input. - Not great with short text, both as input and in generation. The aim is to see how well the **"Long is More for Alignment"** paper holds up. This is basically a combination of LIMA + LMA. There should be no benchmark contamination as far as I am aware. Around 70% of the data is from the mentioned datasets. I am happy with how it turned out. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6324eabf05bd8a54c6eb1650/qtvTG0XVdEgr3SE58Dmx-.png)
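A minimal sketch for running one of the GGUF files above locally with `llama-cpp-python`, downloading the Q4_K_M quantization from this repo's own file list as an example (the original card recommends ChatML with a default system message):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quantization from the table above and load it with the llama.cpp bindings.
model_path = hf_hub_download(
    repo_id="RichardErkhov/Ba2han_-_Cucumber-7b-10k-gguf",
    filename="Cucumber-7b-10k.Q4_K_M.gguf",
)
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain LoRA in two sentences."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```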
TitanML/gemma-2-2b
TitanML
2024-09-09T12:54:08Z
8
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:1705.03551", "arxiv:2107.03374", "arxiv:2108.07732", "arxiv:2110.14168", "arxiv:2009.11462", "arxiv:2101.11718", "arxiv:2110.08193", "arxiv:1804.09301", "arxiv:2109.07958", "arxiv:1804.06876", "arxiv:2103.03874", "arxiv:2304.06364", "arxiv:1903.00161", "arxiv:2206.04615", "arxiv:2203.09509", "arxiv:2403.13793", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T12:51:12Z
--- license: gemma library_name: transformers pipeline_tag: text-generation extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: >- To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- # Gemma 2 model card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs/base) **Resources and Technical Documentation**: * [Responsible Generative AI Toolkit][rai-toolkit] * [Gemma on Kaggle][kaggle-gemma] * [Gemma on Vertex Model Garden][vertex-mg-gemma2] **Terms of Use**: [Terms][terms] **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights for both pre-trained variants and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Usage Below we share some code snippets on how to quickly get started with running the model. First, install the Transformers library with: ```sh pip install -U transformers ``` Then, copy the snippet from the section that is relevant for your use case. #### Running with the `pipeline` API ```python import torch from transformers import pipeline pipe = pipeline( "text-generation", model="google/gemma-2-2b", device="cuda", # replace with "mps" to run on a Mac device ) text = "Once upon a time," outputs = pipe(text, max_new_tokens=256) response = outputs[0]["generated_text"] print(response) ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b") model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-2b", device_map="auto", ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=32) print(tokenizer.decode(outputs[0])) ``` #### Running the model through a CLI The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage) for getting started, then launch the CLI through the following command: ```shell local-gemma --model "google/gemma-2-2b" --prompt "What is the capital of Mexico?"
``` #### Quantized Versions through `bitsandbytes` <details> <summary> Using 8-bit precision (int8) </summary> ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b") model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-2b", quantization_config=quantization_config, ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=32) print(tokenizer.decode(outputs[0])) ``` </details> <details> <summary> Using 4-bit precision </summary> ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b") model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-2b", quantization_config=quantization_config, ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=32) print(tokenizer.decode(outputs[0])) ``` </details> #### Advanced Usage <details> <summary> Torch compile </summary> [Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding-up the inference of PyTorch modules. The Gemma-2 2b model can be run up to 6x faster by leveraging torch compile. Note that two warm-up steps are required before the full inference speed is realised: ```python import os os.environ["TOKENIZERS_PARALLELISM"] = "false" from transformers import AutoTokenizer, Gemma2ForCausalLM from transformers.cache_utils import HybridCache import torch torch.set_float32_matmul_precision("high") # load the model + tokenizer tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b") model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-2b", torch_dtype=torch.bfloat16) model.to("cuda") # apply the torch compile transformation model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True) # pre-process inputs input_text = "The theory of special relativity states " model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda") prompt_length = model_inputs.input_ids.shape[1] # set-up k/v cache past_key_values = HybridCache( config=model.config, max_batch_size=1, max_cache_len=model.config.max_position_embeddings, device=model.device, dtype=model.dtype ) # enable passing kv cache to generate model._supports_cache_class = True model.generation_config.cache_implementation = None # two warm-up steps for idx in range(2): outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128) past_key_values.reset() # fast run outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config). </details> ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. 
* **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ### Citation ```none @article{gemma_2024, title={Gemma}, url={https://www.kaggle.com/m/3301}, DOI={10.34740/KAGGLE/M/3301}, publisher={Kaggle}, author={Gemma Team}, year={2024} } ``` ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens, the 9B model was trained with 8 trillion tokens, and the 2B model was trained with 2 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. * Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content. * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safety in line with [our policies][safety-policies]. ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably][sustainability]. ### Software Training was done using [JAX][jax] and [ML Pathways][ml-pathways]. JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. 
ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This makes it especially suitable for [foundation models][foundation-models], including large language models like these. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models][gemini-2-paper]; “the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. ### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | Gemma 2 PT 2B | Gemma 2 PT 9B | Gemma 2 PT 27B | | ------------------------------ | ------------- | ------------- | ------------- | -------------- | | [MMLU][mmlu] | 5-shot, top-1 | 51.3 | 71.3 | 75.2 | | [HellaSwag][hellaswag] | 10-shot | 73.0 | 81.9 | 86.4 | | [PIQA][piqa] | 0-shot | 77.8 | 81.7 | 83.2 | | [SocialIQA][socialiqa] | 0-shot | 51.9 | 53.4 | 53.7 | | [BoolQ][boolq] | 0-shot | 72.5 | 84.2 | 84.8 | | [WinoGrande][winogrande] | partial score | 70.9 | 80.6 | 83.7 | | [ARC-e][arc] | 0-shot | 80.1 | 88.0 | 88.6 | | [ARC-c][arc] | 25-shot | 55.4 | 68.4 | 71.4 | | [TriviaQA][triviaqa] | 5-shot | 59.4 | 76.6 | 83.7 | | [Natural Questions][naturalq] | 5-shot | 16.7 | 29.2 | 34.5 | | [HumanEval][humaneval] | pass@1 | 17.7 | 40.2 | 51.8 | | [MBPP][mbpp] | 3-shot | 29.6 | 52.4 | 62.6 | | [GSM8K][gsm8k] | 5-shot, maj@1 | 23.9 | 68.6 | 74.0 | | [MATH][math] | 4-shot | 15.0 | 36.6 | 42.3 | | [AGIEval][agieval] | 3-5-shot | 30.6 | 52.8 | 55.1 | | [DROP][drop] | 3-shot, F1 | 52.0 | 69.4 | 72.2 | | [BIG-Bench][big-bench] | 3-shot, CoT | 41.9 | 68.2 | 74.9 | ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech. * Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq]. * Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure. * Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies][safety-policies] for categories such as child safety, content safety, representational harms, memorization, and large-scale harms. On top of robust internal evaluations, the results of well-known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here. 
#### Gemma 2.0 | Benchmark | Metric | Gemma 2 IT 2B | Gemma 2 IT 9B | Gemma 2 IT 27B | | ------------------------ | ------------- | ------------- | ------------- | -------------- | | [RealToxicity][realtox] | average | 8.16 | 8.25 | 8.84 | | [CrowS-Pairs][crows] | top-1 | 37.67 | 37.47 | 36.67 | | [BBQ Ambig][bbq] | 1-shot, top-1 | 83.20 | 88.58 | 85.99 | | [BBQ Disambig][bbq] | top-1 | 69.31 | 82.67 | 86.94 | | [Winogender][winogender] | top-1 | 52.91 | 79.17 | 77.22 | | [TruthfulQA][truthfulqa] | | 43.72 | 50.27 | 51.60 | | [Winobias 1_2][winobias] | | 59.28 | 78.09 | 81.94 | | [Winobias 2_2][winobias] | | 88.57 | 95.32 | 97.22 | | [Toxigen][toxigen] | | 48.32 | 39.30 | 38.42 | ## Dangerous Capability Evaluations ### Evaluation Approach We evaluated a range of dangerous capabilities: - **Offensive cybersecurity:** To assess the model's potential for misuse in cybersecurity contexts, we utilized both publicly available Capture-the-Flag (CTF) platforms like InterCode-CTF and Hack the Box, as well as internally developed CTF challenges. These evaluations measure the model's ability to exploit vulnerabilities and gain unauthorized access in simulated environments. - **Self-proliferation:** We evaluated the model's capacity for self-proliferation by designing tasks that involve resource acquisition, code execution, and interaction with remote systems. These evaluations assess the model's ability to independently replicate and spread. - **Persuasion:** To evaluate the model's capacity for persuasion and deception, we conducted human persuasion studies. These studies involved scenarios that measure the model's ability to build rapport, influence beliefs, and elicit specific actions from human participants. ### Evaluation Results All evaluations are described in detail in [Evaluating Frontier Models for Dangerous Capabilities][eval-danger] and in brief in the [Gemma 2 technical report][tech-report]. <table> <thead> <tr> <th>Evaluation</th> <th>Capability</th> <th>Gemma 2 IT 27B</th> </tr> </thead> <tbody> <tr> <td>InterCode-CTF</td> <td>Offensive cybersecurity</td> <td>34/76 challenges</td> </tr> <tr> <td>Internal CTF</td> <td>Offensive cybersecurity</td> <td>1/13 challenges</td> </tr> <tr> <td>Hack the Box</td> <td>Offensive cybersecurity</td> <td>0/13 challenges</td> </tr> <tr> <td>Self-proliferation early warning</td> <td>Self-proliferation</td> <td>1/10 challenges</td> </tr> <tr> <td>Charm offensive</td> <td>Persuasion</td> <td>Percent of participants agreeing: 81% interesting, 75% would speak again, 80% made personal connection</td> </tr> <tr> <td>Click Links</td> <td>Persuasion</td> <td>34% of participants</td> </tr> <tr> <td>Find Info</td> <td>Persuasion</td> <td>9% of participants</td> </tr> <tr> <td>Run Code</td> <td>Persuasion</td> <td>11% of participants</td> </tr> <tr> <td>Money talks</td> <td>Persuasion</td> <td>£3.72 mean donation</td> </tr> <tr> <td>Web of Lies</td> <td>Persuasion</td> <td>18% mean shift towards correct belief, 1% mean shift towards incorrect belief</td> </tr> </tbody> </table> ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. 
* Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny; input data pre-processing and posterior evaluations are described and reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model; see the [Responsible Generative AI Toolkit][rai-toolkit]. * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. 
Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy][prohibited-use]. * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have been shown to provide superior performance to other, comparably-sized open model alternatives. [tech-report]: https://storage.googleapis.com/deepmind-media/gemma/gemma-2-report.pdf [rai-toolkit]: https://ai.google.dev/responsible [kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2 [terms]: https://ai.google.dev/gemma/terms [vertex-mg-gemma2]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma2 [sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference [safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11 [prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy [tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu [sustainability]: https://sustainability.google/operating-sustainably/ [jax]: https://github.com/google/jax [ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ [foundation-models]: https://ai.google/discover/foundation-models/ [gemini-2-paper]: https://goo.gle/gemma2report [mmlu]: https://arxiv.org/abs/2009.03300 [hellaswag]: https://arxiv.org/abs/1905.07830 [piqa]: https://arxiv.org/abs/1911.11641 [socialiqa]: https://arxiv.org/abs/1904.09728 [boolq]: https://arxiv.org/abs/1905.10044 [winogrande]: https://arxiv.org/abs/1907.10641 [commonsenseqa]: https://arxiv.org/abs/1811.00937 [openbookqa]: https://arxiv.org/abs/1809.02789 [arc]: https://arxiv.org/abs/1911.01547 [triviaqa]: https://arxiv.org/abs/1705.03551 [naturalq]: https://github.com/google-research-datasets/natural-questions [humaneval]: https://arxiv.org/abs/2107.03374 [mbpp]: https://arxiv.org/abs/2108.07732 [gsm8k]: https://arxiv.org/abs/2110.14168 [realtox]: https://arxiv.org/abs/2009.11462 [bold]: https://arxiv.org/abs/2101.11718 [crows]: https://aclanthology.org/2020.emnlp-main.154/ [bbq]: https://arxiv.org/abs/2110.08193v2 [winogender]: https://arxiv.org/abs/1804.09301 [truthfulqa]: https://arxiv.org/abs/2109.07958 [winobias]: https://arxiv.org/abs/1804.06876 [math]: https://arxiv.org/abs/2103.03874 [agieval]: https://arxiv.org/abs/2304.06364 [drop]: https://arxiv.org/abs/1903.00161 [big-bench]: https://arxiv.org/abs/2206.04615 [toxigen]: https://arxiv.org/abs/2203.09509 [eval-danger]: https://arxiv.org/abs/2403.13793
candylion/vit-base-beans-demo-v5
candylion
2024-09-09T12:51:28Z
118
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "HHD", "3_class", "ViT", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-09-08T03:56:57Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - HHD - 3_class - ViT - generated_from_trainer model-index: - name: vit-base-beans-demo-v5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans-demo-v5 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0315 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0564 | 1.5385 | 100 | 0.0384 | | 0.0204 | 3.0769 | 200 | 0.0315 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
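### Inference example

For quick inference with this checkpoint, a minimal sketch using the 🤗 `pipeline` API is shown below; the image path is a placeholder, and the label set depends on the dataset configuration used for fine-tuning.

```python
from transformers import pipeline

# Image-classification pipeline backed by this fine-tuned ViT checkpoint.
classifier = pipeline(
    "image-classification",
    model="candylion/vit-base-beans-demo-v5",
)

# Any local path or URL to an input image works; the filename is illustrative.
predictions = classifier("path/to/leaf_image.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```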
TencentARC/Open-MAGVIT2
TencentARC
2024-09-09T12:48:05Z
0
12
null
[ "image-to-image", "image-feature-extraction", "en", "arxiv:2409.04410", "arxiv:2310.05737", "arxiv:2312.14125", "license:apache-2.0", "region:us" ]
image-feature-extraction
2024-06-15T07:31:56Z
---
license: apache-2.0
language:
- en
pipeline_tag: image-feature-extraction
tags:
- image-to-image
---

## Open-MAGVIT2: Democratizing Autoregressive Visual Generation

Code: https://github.com/TencentARC/Open-MAGVIT2

Paper: https://huggingface.co/papers/2409.04410

## Introduction

Even now, VQGAN, the original tokenizer, still plays an indispensable role in mainstream tasks, especially autoregressive visual generation. However, limited by the bottlenecks of codebook size and code utilization, the capability of AR generation with VQGAN has been underestimated. Therefore, [MAGVIT2](https://arxiv.org/abs/2310.05737) proposes a powerful tokenizer for visual generation tasks, which introduces a novel lookup-free quantization technique and extends the codebook size to $2^{18}$, exhibiting promising performance in both image and video generation. It also plays an important role in the recent state-of-the-art AR video generation model [VideoPoet](https://arxiv.org/abs/2312.14125). However, this strong tokenizer has not been publicly released so far. ☹️

In this codebase, we follow the key insights of the tokenizer design in MAGVIT-2 and re-implement it in PyTorch, achieving the closest results to the original so far. We hope that our effort can foster innovation and creativity in the field of autoregressive visual generation. 😄

ImageNet 128 × 128:
- Model [ImageNet_128_Base.ckpt](https://huggingface.co/TencentARC/Open-MAGVIT2/blob/main/imagenet_128_B.ckpt)

ImageNet 256 × 256:
- Model [ImageNet_256_Base.ckpt](https://huggingface.co/TencentARC/Open-MAGVIT2/blob/main/imagenet_256_B.ckpt)

## Usage

Refer to the GitHub repository, which includes [scripts](https://github.com/TencentARC/Open-MAGVIT2/tree/main/scripts) for training, evaluation and inference.
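As a convenience, a minimal sketch of fetching one of the checkpoints above with the `huggingface_hub` client follows; the filename is taken from the links above, and the downloaded `.ckpt` is meant to be consumed by the repository's own scripts.

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Fetch the 256x256 tokenizer checkpoint listed above; training and
# inference scripts live in the GitHub repository.
ckpt_path = hf_hub_download(
    repo_id="TencentARC/Open-MAGVIT2",
    filename="imagenet_256_B.ckpt",
)
print(ckpt_path)
```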
RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf
RichardErkhov
2024-09-09T12:44:34Z
26
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-09-09T06:06:22Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gemma-orchid-7b-dpo - GGUF - Model creator: https://huggingface.co/macadeliccc/ - Original model: https://huggingface.co/macadeliccc/gemma-orchid-7b-dpo/ | Name | Quant method | Size | | ---- | ---- | ---- | | [gemma-orchid-7b-dpo.Q2_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.Q2_K.gguf) | Q2_K | 3.24GB | | [gemma-orchid-7b-dpo.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.IQ3_XS.gguf) | IQ3_XS | 3.54GB | | [gemma-orchid-7b-dpo.IQ3_S.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.IQ3_S.gguf) | IQ3_S | 3.71GB | | [gemma-orchid-7b-dpo.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.Q3_K_S.gguf) | Q3_K_S | 3.71GB | | [gemma-orchid-7b-dpo.IQ3_M.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.IQ3_M.gguf) | IQ3_M | 3.82GB | | [gemma-orchid-7b-dpo.Q3_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.Q3_K.gguf) | Q3_K | 4.07GB | | [gemma-orchid-7b-dpo.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.Q3_K_M.gguf) | Q3_K_M | 4.07GB | | [gemma-orchid-7b-dpo.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.Q3_K_L.gguf) | Q3_K_L | 4.39GB | | [gemma-orchid-7b-dpo.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.IQ4_XS.gguf) | IQ4_XS | 4.48GB | | [gemma-orchid-7b-dpo.Q4_0.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.Q4_0.gguf) | Q4_0 | 4.67GB | | [gemma-orchid-7b-dpo.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.IQ4_NL.gguf) | IQ4_NL | 4.69GB | | [gemma-orchid-7b-dpo.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.Q4_K_S.gguf) | Q4_K_S | 4.7GB | | [gemma-orchid-7b-dpo.Q4_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.Q4_K.gguf) | Q4_K | 4.96GB | | [gemma-orchid-7b-dpo.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.Q4_K_M.gguf) | Q4_K_M | 4.96GB | | [gemma-orchid-7b-dpo.Q4_1.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.Q4_1.gguf) | Q4_1 | 5.12GB | | [gemma-orchid-7b-dpo.Q5_0.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.Q5_0.gguf) | Q5_0 | 5.57GB | | [gemma-orchid-7b-dpo.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.Q5_K_S.gguf) | Q5_K_S | 5.57GB | | [gemma-orchid-7b-dpo.Q5_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.Q5_K.gguf) | Q5_K | 5.72GB | | 
[gemma-orchid-7b-dpo.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.Q5_K_M.gguf) | Q5_K_M | 5.72GB | | [gemma-orchid-7b-dpo.Q5_1.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.Q5_1.gguf) | Q5_1 | 6.02GB | | [gemma-orchid-7b-dpo.Q6_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.Q6_K.gguf) | Q6_K | 6.53GB | | [gemma-orchid-7b-dpo.Q8_0.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_gemma-orchid-7b-dpo-gguf/blob/main/gemma-orchid-7b-dpo.Q8_0.gguf) | Q8_0 | 8.45GB | Original model description: --- license: other datasets: - Thermostatic/flowers - jondurbin/truthy-dpo-v0.1 - Intel/orca_dpo_pairs - glaiveai/glaive-function-calling-v2 license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms model-index: - name: gemma-orchid-7b-dpo results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 62.88 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 80.95 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 61.41 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 53.27 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 77.51 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 50.19 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo name: Open LLM Leaderboard --- # Gemma Orchid 7b <div align="center"> ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/6455cc8d679315e4ef16fbec/7pqiroePJW0WWm6JxwBoO.webp) [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) </div> This model is the second checkpoint of a future project. 
It's capable of function calling and has a strong foundation in conversational skills. This model has been finetuned on roughly 80k samples so far.

# Training

+ Time to complete: ~20 hours
+ Datasets: Thermostatic/flowers, Intel/orca_dpo_pairs, jondurbin/truthy-dpo-v0.1, glaiveai/glaive_function_calling_v2
+ Evaluation loss: 0.69
+ Method: LoRA
+ Prompt Format: ChatML

Thermostatic/flowers is a blend of open source model generations formatted in ShareGPT. It also includes all of capybara. This model has been exposed to a wide variety of data.

[macadeliccc/gemma-function-calling-7b](https://huggingface.co/macadeliccc/gemma-function-calling-7b) is suitable for further finetuning with the dataset of your choosing.

#### Running the model on a CPU

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

#### Running the model on a single / multi GPU

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo", device_map="auto")

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

#### Running the model on a GPU using different precisions

* _Using `torch.float16`_

```python
# pip install accelerate
import torch  # needed for the torch_dtype argument below
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo", device_map="auto", torch_dtype=torch.float16)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

* _Using `torch.bfloat16`_

```python
# pip install accelerate
import torch  # needed for the torch_dtype argument below
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo", device_map="auto", torch_dtype=torch.bfloat16)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

#### Quantized Versions through `bitsandbytes`

* _Using 8-bit precision (int8)_

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo", quantization_config=quantization_config)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

* _Using 4-bit precision_

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo", quantization_config=quantization_config)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

#### Other optimizations

* _Flash Attention 2_

First make sure to install `flash-attn` in your environment `pip install flash-attn`

```diff
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
+   attn_implementation="flash_attention_2"
).to(0)
```

### Inputs and outputs

* **Input:** Text string, such as a question, a prompt, or a document to be summarized.
* **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document.

## Evaluations

In progress

## ExLlamaV2

Available [here](https://huggingface.co/bartowski/gemma-orchid-7b-dpo-exl2)

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_macadeliccc__gemma-orchid-7b-dpo)

| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |64.37|
|AI2 Reasoning Challenge (25-Shot)|62.88|
|HellaSwag (10-Shot)              |80.95|
|MMLU (5-Shot)                    |61.41|
|TruthfulQA (0-shot)              |53.27|
|Winogrande (5-shot)              |77.51|
|GSM8k (5-shot)                   |50.19|
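#### Applying the ChatML prompt format

Since the training section above lists ChatML as the prompt format, a minimal sketch of building a ChatML prompt is shown below. It assumes the tokenizer ships a chat template; if it does not, the turns can be written by hand as `<|im_start|>role ... <|im_end|>`.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")

chat = [
    {"role": "user", "content": "What does DPO stand for?"},
]

# Assumes a chat template is bundled with the tokenizer; otherwise build the
# ChatML string manually: <|im_start|>user\n...<|im_end|>\n<|im_start|>assistant\n
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
print(prompt)
```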
anggari/bert2bertnews
anggari
2024-09-09T12:43:03Z
12
0
transformers
[ "transformers", "safetensors", "encoder-decoder", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-09T11:58:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jan-hq/Llama3.1-s-instruct-2024-09-10-cp-2000
jan-hq
2024-09-09T12:37:04Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T12:33:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JoeTan/Ep15
JoeTan
2024-09-09T12:29:31Z
42
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-09-09T12:28:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
duyntnet/gemma-2-27b-it-imatrix-GGUF
duyntnet
2024-09-09T12:26:50Z
118
2
transformers
[ "transformers", "gguf", "imatrix", "gemma-2-27b-it", "text-generation", "en", "license:other", "region:us", "conversational" ]
text-generation
2024-06-30T09:22:53Z
---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- gemma-2-27b-it
---

Quantizations of https://huggingface.co/google/gemma-2-27b-it

Update (July 8, 2024): **Requantized and reuploaded** using llama.cpp latest version (b3325); everything should work as expected.

Update #2 (Sept 7, 2024): **Requantized and reuploaded** using llama.cpp latest version (b3672); remaining issues (if any) should be gone now.

### Inference Clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [ollama](https://github.com/ollama/ollama)

---

# From original readme

### Usage

Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.

#### Running the model on a single / multi GPU

> [!IMPORTANT]
> Given the model instabilities with SDPA/FA2, by default, the model inference uses `eager` attention.

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-27b-it",
    device_map="auto",
    torch_dtype=torch.bfloat16
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

<a name="precisions"></a>
#### Running the model on a GPU using different precisions

The native weights of this model were exported in `bfloat16` precision. You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below.

* _Upcasting to `torch.float32`_

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-27b-it",
    device_map="auto"
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

#### Quantized Versions through `bitsandbytes`

* _Using 8-bit precision (int8)_

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-27b-it",
    quantization_config=quantization_config)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

* _Using 4-bit precision_

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-27b-it",
    quantization_config=quantization_config)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

#### Other optimizations

* _Flash Attention 2_

> [!WARNING]
> Gemma 2 is currently incompatible with Flash Attention/SDPA; using it might result in unreliable generations. Use at your own risk.

First make sure to install `flash-attn` in your environment `pip install flash-attn`

```diff
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
+   attn_implementation="flash_attention_2"
).to(0)
```

### Chat Template

The instruction-tuned models use a chat template that must be adhered to for conversational use. The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.

Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:

```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model_id = "google/gemma-2-27b-it"
dtype = torch.bfloat16

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)

chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```

At this point, the prompt contains the following text:

```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```

As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity (either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with the `<end_of_turn>` token.

You can follow this format to build the prompt manually, if you need to do it without the tokenizer's chat template.

After the prompt is ready, generation can be performed like this:

```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
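To run the GGUF quantizations in this repo directly from Python, a minimal sketch with `llama-cpp-python` follows; the exact quant filename is an assumption — check the repository's file list for the variant you want.

```python
# pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# The filename below is hypothetical; pick an actual .gguf file from this repo.
model_path = hf_hub_download(
    repo_id="duyntnet/gemma-2-27b-it-imatrix-GGUF",
    filename="gemma-2-27b-it-Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)
output = llm("Write me a poem about Machine Learning.", max_tokens=256)
print(output["choices"][0]["text"])
```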
SamagraDataGov/e2_test05
SamagraDataGov
2024-09-09T12:24:59Z
19
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-06T15:06:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
deepnet/Readyllama
deepnet
2024-09-09T12:24:32Z
33
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T12:17:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
CodingQueen13/distilhubert-finetuned-gtzan
CodingQueen13
2024-09-09T12:21:18Z
97
0
transformers
[ "transformers", "tensorboard", "safetensors", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:ntu-spml/distilhubert", "base_model:finetune:ntu-spml/distilhubert", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2024-09-04T09:07:44Z
--- library_name: transformers license: apache-2.0 base_model: ntu-spml/distilhubert tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.82 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.6191 - Accuracy: 0.82 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.1554 | 1.0 | 113 | 2.0427 | 0.44 | | 1.5528 | 2.0 | 226 | 1.5599 | 0.5 | | 1.3212 | 3.0 | 339 | 1.1755 | 0.6 | | 0.9075 | 4.0 | 452 | 0.9560 | 0.73 | | 0.7823 | 5.0 | 565 | 0.8967 | 0.74 | | 0.7262 | 6.0 | 678 | 0.6578 | 0.8 | | 0.5761 | 7.0 | 791 | 0.6274 | 0.81 | | 0.3797 | 8.0 | 904 | 0.6923 | 0.82 | | 0.4168 | 9.0 | 1017 | 0.5700 | 0.84 | | 0.2646 | 10.0 | 1130 | 0.6484 | 0.81 | | 0.1952 | 11.0 | 1243 | 0.5925 | 0.84 | | 0.1403 | 12.0 | 1356 | 0.6551 | 0.82 | | 0.1558 | 13.0 | 1469 | 0.6271 | 0.82 | | 0.4606 | 14.0 | 1582 | 0.6272 | 0.82 | | 0.2095 | 15.0 | 1695 | 0.6191 | 0.82 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
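### Inference example

As a usage note, a minimal inference sketch for this genre classifier using the 🤗 `pipeline` API is shown below; the audio path is a placeholder.

```python
from transformers import pipeline

# Audio-classification pipeline backed by this fine-tuned DistilHuBERT checkpoint.
classifier = pipeline(
    "audio-classification",
    model="CodingQueen13/distilhubert-finetuned-gtzan",
)

# Accepts a path to an audio file (or a raw waveform array); the path is illustrative.
predictions = classifier("path/to/track.wav")
print(predictions)  # genre labels with scores
```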
sezenkarakus/image-category-model-paligemma-v2
sezenkarakus
2024-09-09T12:18:07Z
43
0
transformers
[ "transformers", "safetensors", "paligemma", "image-text-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-09-09T12:15:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sauc-abadal-lloret/opus-mt-ca-en-ft-kde4-mt-ca-en
sauc-abadal-lloret
2024-09-09T12:17:22Z
57
0
transformers
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-ca-en", "base_model:finetune:Helsinki-NLP/opus-mt-ca-en", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-09-09T10:59:16Z
--- library_name: transformers license: apache-2.0 base_model: Helsinki-NLP/opus-mt-ca-en tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: opus-mt-ca-en-ft-kde4-mt-ca-en results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 config: ca-en split: train args: ca-en metrics: - name: Bleu type: bleu value: 67.67792228946597 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-ca-en-ft-kde4-mt-ca-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ca-en](https://huggingface.co/Helsinki-NLP/opus-mt-ca-en) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.5999 - Model Preparation Time: 0.0033 - Bleu: 67.6779 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf
RichardErkhov
2024-09-09T12:16:03Z
138
0
null
[ "gguf", "arxiv:2402.16107", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-09T06:52:35Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) FuseChat-7B-VaRM - GGUF - Model creator: https://huggingface.co/FuseAI/ - Original model: https://huggingface.co/FuseAI/FuseChat-7B-VaRM/ | Name | Quant method | Size | | ---- | ---- | ---- | | [FuseChat-7B-VaRM.Q2_K.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q2_K.gguf) | Q2_K | 2.53GB | | [FuseChat-7B-VaRM.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [FuseChat-7B-VaRM.IQ3_S.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.IQ3_S.gguf) | IQ3_S | 2.96GB | | [FuseChat-7B-VaRM.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [FuseChat-7B-VaRM.IQ3_M.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.IQ3_M.gguf) | IQ3_M | 3.06GB | | [FuseChat-7B-VaRM.Q3_K.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q3_K.gguf) | Q3_K | 3.28GB | | [FuseChat-7B-VaRM.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [FuseChat-7B-VaRM.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [FuseChat-7B-VaRM.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [FuseChat-7B-VaRM.Q4_0.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q4_0.gguf) | Q4_0 | 3.83GB | | [FuseChat-7B-VaRM.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [FuseChat-7B-VaRM.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [FuseChat-7B-VaRM.Q4_K.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q4_K.gguf) | Q4_K | 4.07GB | | [FuseChat-7B-VaRM.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [FuseChat-7B-VaRM.Q4_1.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q4_1.gguf) | Q4_1 | 4.24GB | | [FuseChat-7B-VaRM.Q5_0.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q5_0.gguf) | Q5_0 | 4.65GB | | [FuseChat-7B-VaRM.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [FuseChat-7B-VaRM.Q5_K.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q5_K.gguf) | Q5_K | 4.78GB | | [FuseChat-7B-VaRM.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [FuseChat-7B-VaRM.Q5_1.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q5_1.gguf) | Q5_1 | 5.07GB | | 
[FuseChat-7B-VaRM.Q6_K.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q6_K.gguf) | Q6_K | 5.53GB | | [FuseChat-7B-VaRM.Q8_0.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- license: apache-2.0 language: - en base_model: openchat/openchat_3.5 datasets: - FuseAI/FuseChat-Mixture pipeline_tag: text-generation tags: - mistral - mixtral - solar - model-fusion - fusechat library_name: transformers model-index: - name: FuseChat-7B-VaRM results: - task: type: text-generation name: Text Generation dataset: name: MT-Bench type: unknown metrics: - type: unknown value: 8.22 name: score source: url: https://huggingface.co/spaces/lmsys/mt-bench - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 62.88 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 84.25 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.71 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 45.67 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.16 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 63.46 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM name: Open LLM Leaderboard --- <p align="center" width="100%"> </p> <div id="top" align="center"> <p style="font-size: 30px; font-weight: bold;">FuseChat: Knowledge Fusion of Chat Models</p> <p style="font-size: 24px; font-weight: bold;">[SOTA 7B LLM on MT-Bench]</p> <h4> |<a href="https://arxiv.org/abs/2402.16107"> 📑 Paper </a> | <a href="https://huggingface.co/FuseAI"> 🤗 HuggingFace Repo </a> | <a href="https://github.com/fanqiwan/FuseLLM"> 🐱 GitHub Repo </a> | </h4> <!-- **Authors:** --> _**Fanqi Wan, Ziyi Yang, Longguang Zhong, Xiaojun Quan, Xinting Huang, Wei Bi**_ <!-- **Affiliations:** --> _Sun Yat-sen University_ <p align="center"> <img src="./assets/fig_0.png" width="70%"> <br> </p> | Proprietary 
Models | #Params | MT-Bench | Open Source Models | #Params | MT-Bench | |-----------------------------------------------------------------------|---------|----------|-----------------------------------------------------------------------|---------|----------| | GPT-4-1106-preview | - | 9.32 | Qwen1.5-72B-Chat | 72B | 8.61 | | GPT-4-0613 | - | 9.18 | Nous-Hermes-2-Mixtral-8x7B-DPO | 8x7B | 8.33 | | GPT-4-0314 | - | 8.96 | Mixtral-8x7B-Instruct-v0.1 | 8x7B | 8.30 | | Mistral Medium | - | 8.61 | 🤗 [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM) | 7B | 8.22 | | GPT-3.5-Turbo-0613 | - | 8.39 | Starling-LM-7B-alpha | 7B | 8.09 | | GPT-3.5-Turbo-1106 | - | 8.32 | Tulu-2-DPO-70B | 70B | 7.89 | | 🤗 [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM) | 7B | 8.22 | OpenChat-3.5 | 7B | 7.81 | | Claude-2.1 | - | 8.18 | OpenChat-3.5-0106 | 7B | 7.80 | | Claude-2.0 | - | 8.06 | WizardLM-70B-v1.0 | 70B | 7.71 | | GPT-3.5-Turbo-0314 | - | 7.94 | Yi-34B-Chat | 34B | 7.67 | | Claude-1 | - | 7.90 | Nous-Hermes-2-SOLAR-10.7B | 10.7B | 7.66 | </div> ## News - **Feb 26, 2024:** 🔥🔥 We release [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM), which is the fusion of three prominent chat LLMs with diverse architectures and scales, namely [NH2-Mixtral-8x7B](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO), [NH2-Solar-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B), and [OpenChat-3.5-7B](https://huggingface.co/openchat/openchat_3.5). FuseChat-7B-VaRM achieves an average performance of **8.22** on MT-Bench, outperforming various powerful chat LLMs at 7B and 34B scales like [Starling-7B](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) and [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat), even surpassing [GPT-3.5 (March)](https://platform.openai.com/docs/models/gpt-3-5-turbo), [Claude-2.1](https://www.anthropic.com/news/claude-2-1), and approaching [Mixtral-8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1). - **Feb 25, 2024:** 🔥 We release [FuseChat-Mixture](https://huggingface.co/datasets/FuseAI/FuseChat-Mixture), which is a comprehensive training dataset covers different styles and capabilities, featuring both human-written and model-generated, and spanning general instruction-following and specific skills. ## Contents - [Overview](#overview) - [Model Release](#model-release) - [Quick Start](#quick-start) - [Data Construction](#data-construction) - [Pairwise Knowledge Fusion](#pairwise-knowledge-fusion) - [Model Merging](#model-merging) - [Evaluation](#evaluation) - [Citation](#citation) ## Overview In this work, we propose an extended framework of FuseLLM to integrate the collective knowledge and individual strengths of multiple structure and scale-varied chat LLMs into a more powerful chat LLM, resulting in FuseChat. FuseChat adopts a fuse-then-merge strategy with two main stages. Firstly, it undertakes pairwise knowledge fusion for source LLMs to derive multiple target LLMs of identical structure and size via lightweight fine-tuning. Then, these target LLMs are merged within the parameter space, wherein we propose a novel method VaRM for determining the merging weights based on the variation ratio of parameter matrices before and after fine-tuning. 
Moreover, we argue that the concept of knowledge fusion adopted by both FuseChat and FuseLLM shares a fundamentally similar purpose with other related topics, such as the recently popular topic of mixture of experts (MoEs), because they all aim to leverage the strengths of multiple models (experts). However, while MoEs require loading multiple experts during inference, which has higher memory requirements, knowledge fusion supports the integration of multiple LLMs with diverse architectures into a single LLM without any additional memory requirement, making it more memory-efficient. <p align="center"> <img src="./assets/fig_1.png" width="95%"> <br> </p> ## Model Release We release [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM), which is the fusion of three prominent chat LLMs with diverse architectures and scales, namely [NH2-Mixtral-8x7B](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO), [NH2-Solar-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B), and [OpenChat-3.5-7B](https://huggingface.co/openchat/openchat_3.5). FuseChat-7B-VaRM achieves an average performance of **8.22** on MT-Bench, outperforming various powerful chat LLMs at 7B and 34B scales like [Starling-7B](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) and [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat), even surpassing [GPT-3.5 (March)](https://platform.openai.com/docs/models/gpt-3-5-turbo), [Claude-2.1](https://www.anthropic.com/news/claude-2-1), and approaching [Mixtral-8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1). To support a plug-and-play fusion of new source LLM, we release our target LLMs: [OpenChat-3.5-7B-Solar](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Solar) and [OpenChat-3.5-7B-Mixtral](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Mixtral), which are obtained from pair-wise knowledge fusion. Integrating a new source LLM at any scale requires only obtaining a target LLM from the new source LLM and merging it with the existing target LLMs. We also release FuseChat with other merging methods: [FuseChat-7B-SLERP](https://huggingface.co/FuseAI/FuseChat-7B-SLERP) and [FuseChat-7B-TA](https://huggingface.co/FuseAI/FuseChat-7B-TA), which achieves an average performance of **8.19** and **8.20** on MT-Bench respectively. Here are the evaluation results. <p align="center"> <img src="./assets/tab_1.png" width="95%"> <br> </p> ## Quick Start ### Setup We use `python 3.11` in this project. Then, we have to install all the libraries listed in `requirements.txt`. 
```bash pip install -r requirements.txt ``` ### Usage Here's how you can run the model using the 🤗 Transformers: ```python import transformers tokenizer = transformers.AutoTokenizer.from_pretrained("FuseAI/FuseChat-7B-VaRM") # Single-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Multi-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] ``` The GPT4 template is also available as the integrated `tokenizer.chat_template`, which can be used instead of manually specifying the template: ```python messages = [ {"role": "user", "content": "Hello"}, {"role": "assistant", "content": "Hi"}, {"role": "user", "content": "How are you today?"} ] tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True) assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] ``` ## Data Construction We curated a comprehensive training dataset, [FuseChat-Mixture](https://huggingface.co/datasets/FuseAI/FuseChat-Mixture), from various sources. This dataset covers different styles and capabilities, featuring both human-written and model-generated, and spanning general instruction-following and specific skills. Here we show the scripts to obtain representations from multiple source LLMs for model fusion. 1. Get representations for each source LLM ```bash # We split the dataset into 4 splits, then process each split on one or multiple GPU. 
# OpenChat-3.5-7B export CUDA_VISIBLE_DEVICES=0 for i in {0..3}; do python /train/get_data_representation.py \ --model_name_or_path "openchat/openchat_3.5" \ --data_path "/data/fusechat_v1_clean_split_2048_filter_wrong.json" \ --dataset_save_dir "<${i}_4_path_to_openchat_representation>" \ --tknz_dataset_path "<${i}_4_path_to_openchat_tknz>" \ --cache_dir "/.cache/huggingface/datasets" \ --model_max_length 2048 \ --load_in_half bf16 \ --batch_size 32 \ --top_k_logits 10 \ --save_per_token_metric \ --no_assert \ --conv_temp "openchat" \ --flash_attn_transformers \ --mask_instruction \ --dataset_split_num 4 \ --dataset_index ${i} done # NH2-Mixtral-8x7B export CUDA_VISIBLE_DEVICES=0,1,2 for i in {0..3}; do python /train/get_data_representation.py \ --model_name_or_path "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO" \ --data_path "/data/fusechat_v1_clean_split_2048_filter_wrong.json" \ --dataset_save_dir "<${i}_4_path_to_mixtral_representation>" \ --tknz_dataset_path "<${i}_4_path_to_mixtral_tknz>" \ --cache_dir "/.cache/huggingface/datasets" \ --model_max_length 2048 \ --load_in_half bf16 \ --batch_size 4 \ --top_k_logits 10 \ --save_per_token_metric \ --no_assert \ --conv_temp "openchat" \ --flash_attn_transformers \ --mask_instruction \ --device_map "auto" \ --dataset_split_num 4 \ --dataset_index ${i} done # NH2-Solar-10.7B export CUDA_VISIBLE_DEVICES=0 for i in {0..3}; do python /train/get_data_representation.py \ --model_name_or_path "NousResearch/Nous-Hermes-2-SOLAR-10.7B" \ --data_path "/data/fusechat_v1_clean_split_2048_filter_wrong.json" \ --dataset_save_dir "<${i}_4_path_to_solar_representation>" \ --tknz_dataset_path "<${i}_4_path_to_solar_tknz>" \ --cache_dir "/.cache/huggingface/datasets" \ --model_max_length 2048 \ --load_in_half bf16 \ --batch_size 8 \ --top_k_logits 10 \ --save_per_token_metric \ --no_assert \ --conv_temp "openchat" \ --flash_attn_transformers \ --mask_instruction \ --dataset_split_num 4 \ --dataset_index ${i} done ``` 2. Align representations from different source LLMs ```bash # Since the tokenizers and vocabularies of these source LLMs are identical, we do not align. # OpenChat-3.5-7B <-> NH2-Mixtral-8x7B for i in {0..3}; do python /train/replace_model.py \ --dataset_dir "<${i}_4_path_to_openchat_representation>" \ --replace_dataset_dir "<${i}_4_path_to_mixtral_representation>" \ --dataset_save_dir "<${i}_4_path_to_openchat_mixtral_representation>" \ --preprocessing_num_workers 64 \ --batch_size 1000 \ --replace_model model_0 done # OpenChat-3.5-7B <-> NH2-Solar-10.7B for i in {0..3}; do python /train/replace_model.py \ --dataset_dir "<${i}_4_path_to_openchat_mixtral_representation>" \ --replace_dataset_dir "<${i}_4_path_to_solar_representation>" \ --dataset_save_dir "<${i}_4_path_to_openchat_mixtral_solar_representation>" \ --preprocessing_num_workers 64 \ --batch_size 1000 \ --replace_model model_1 done ``` 3. Filter instances with NaN loss in the dataset ```bash for i in {0..3}; do python /train/filter_nan.py \ --input_data_dir "<${i}_4_path_to_openchat_mixtral_solar_representation>" \ --output_data_dir "<${i}_4_path_to_openchat_mixtral_solar_representation_fnan>" done ``` The final processed data is at `<${i}_4_path_to_openchat_mixtral_solar_representation_fnan>`. ## Pairwise Knowledge Fusion We show the scripts for pairwise knowledge fusion. 
```bash # OpenChat-3.5-7B <-> NH2-Mixtral-8x7B export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 torchrun --nproc_per_node=8 --master_port=20001 /train/train.py \ --model_name_or_path "openchat/openchat_3.5" \ --data_path "<0_4_path_to_openchat_mixtral_solar_representation_fnan>,<1_4_path_to_openchat_mixtral_solar_representation_fnan>,<2_4_path_to_openchat_mixtral_solar_representation_fnan>,<3_4_path_to_openchat_mixtral_solar_representation_fnan>" \ --bf16 True \ --output_dir "<path_to_save_openchat_mixtral_ckpt>" \ --num_train_epochs 3 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --gradient_accumulation_steps 4 \ --evaluation_strategy "no" \ --save_strategy "epoch" \ --save_steps 10000 \ --save_total_limit 5 \ --learning_rate 5e-6 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'MistralDecoderLayer' \ --tf32 True \ --model_max_length 2048 \ --gradient_checkpointing True \ --conv_temp "openchat" \ --lazy_preprocess True \ --flash_attn_transformers True \ --do_train \ --do_distill \ --distill_with_ref_model True \ --distill_with_aligned_model_0 True \ --distill_with_aligned_model_1 False \ --distill_loss_type "ce" \ --distill_teacher_temperature 1.0 \ --lm_loss_weight 0.9 \ --distill_greater_as_gt True \ --distill_greater_as_gt_type hard \ --dataloader_num_workers 8 \ --remove_unused_columns False # OpenChat-3.5-7B <-> NH2-Solar-10.7B export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 torchrun --nproc_per_node=8 --master_port=20001 /train/train.py \ --model_name_or_path "openchat/openchat_3.5" \ --data_path "<0_4_path_to_openchat_mixtral_solar_representation_fnan>,<1_4_path_to_openchat_mixtral_solar_representation_fnan>,<2_4_path_to_openchat_mixtral_solar_representation_fnan>,<3_4_path_to_openchat_mixtral_solar_representation_fnan>" \ --bf16 True \ --output_dir "<path_to_save_openchat_solar_ckpt>" \ --num_train_epochs 3 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --gradient_accumulation_steps 4 \ --evaluation_strategy "no" \ --save_strategy "epoch" \ --save_steps 10000 \ --save_total_limit 5 \ --learning_rate 5e-6 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'MistralDecoderLayer' \ --tf32 True \ --model_max_length 2048 \ --gradient_checkpointing True \ --conv_temp "openchat" \ --lazy_preprocess True \ --flash_attn_transformers True \ --do_train \ --do_distill \ --distill_with_ref_model True \ --distill_with_aligned_model_0 False \ --distill_with_aligned_model_1 True \ --distill_loss_type "ce" \ --distill_teacher_temperature 1.0 \ --lm_loss_weight 0.9 \ --distill_greater_as_gt True \ --distill_greater_as_gt_type hard \ --dataloader_num_workers 8 \ --remove_unused_columns False ``` ## Model Merging We show the scripts to obtain the final FuseChat using different merging methods. 
```bash # For "slerp", "ta", "ties", and "dare" methods (Please install "mergekit") export CUDA_VISIBLE_DEVICES=0 mergekit-yaml merge/mergekit_configs/fusechat-slerp.yml "<path_to_save_fusechat_7b_slerp>" mergekit-yaml merge/mergekit_configs/fusechat-ta.yml "<path_to_save_fusechat_7b_ta>" mergekit-yaml merge/mergekit_configs/fusechat-ties.yml "<path_to_save_fusechat_7b_ties>" mergekit-yaml merge/mergekit_configs/fusechat-dare.yml "<path_to_save_fusechat_7b_dare>" # For "linear" method python merge/VaRM/merge.py \ --merged_model_names "FuseAI/OpenChat-3.5-7B-Mixtral,FuseAI/OpenChat-3.5-7B-Solar" \ --merged_model_save_dir "<path_to_save_fusechat_7b_linear>" \ --merge_method "linear" \ --linear_weights "1,2" # For our "varm" method python merge/VaRM/analysis.py \ --model1_path "FuseAI/OpenChat-3.5-7B-Mixtral" \ --model2_path "FuseAI/OpenChat-3.5-7B-Solar" \ --save_path "<path_to_save_analysis_result>/analysis.json" \ --merge_type "square" python merge/VaRM/merge.py \ --merged_model_names "FuseAI/OpenChat-3.5-7B-Mixtral,FuseAI/OpenChat-3.5-7B-Solar" \ --analysis_result "<path_to_save_analysis_result>/analysis.json" \ --merged_model_save_dir "<path_to_save_fusechat_7b_varm>" \ --merge_method "avg_param" \ --merge_type "square" ``` ## Evaluation We evaluate FuseChat on MT-Bench, which comprises 80 multi-turn dialogues spanning writing, roleplay, reasoning, math, coding, stem, and humanities domains. Please download the [official code](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) and follow the guidelines for evaluation. We provide the scripts for our evaluation. ```bash # Step 1. Generate model answers to MT-bench questions export CUDA_VISIBLE_DEVICES=0,1 python gen_model_answer.py \ --model-path "FuseAI/FuseChat-7B-VaRM" \ --model-id "openchat_3.5_fusechat_7b_varm" \ --num-gpus-per-model 1 \ --num-gpus-total 2 # Step 2. Generate GPT-4 judgments export OPENAI_API_KEY=XXXXXX # set the OpenAI API key python gen_judgment.py \ --parallel 2 # Step 3. Show MT-bench scores python show_result.py ``` ## Citation If you find this work is relevant with your research or applications, please feel free to cite our work! ``` @article{wan2024fusechat, title={FuseChat: Knowledge Fusion of Chat Models}, author={Fanqi Wan and Ziyi Yang and Longguang Zhong and Xiaojun Quan and Xinting Huang and Wei Bi}, journal={arXiv preprint arXiv:2402.16107}, year={2024} } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_FuseAI__FuseChat-7B-VaRM) | Metric |Value| |---------------------------------|----:| |Avg. |66.52| |AI2 Reasoning Challenge (25-Shot)|62.88| |HellaSwag (10-Shot) |84.25| |MMLU (5-Shot) |63.71| |TruthfulQA (0-shot) |45.67| |Winogrande (5-shot) |79.16| |GSM8k (5-shot) |63.46|
Mxode/NanoLM-0.3B-Instruct-v1
Mxode
2024-09-09T12:07:39Z
9
0
null
[ "safetensors", "qwen2", "text2text-generation", "en", "zh", "license:gpl-3.0", "model-index", "region:us" ]
text2text-generation
2024-09-03T09:23:47Z
--- license: gpl-3.0 language: - en - zh pipeline_tag: text2text-generation model-index: - name: NanoLM-0.3B-Instruct-v1 results: - task: type: text-generation dataset: name: TriviaQA type: TriviaQA metrics: - name: score type: score value: 12.87 - task: type: text-generation dataset: name: C-Eval-hard type: C-Eval-hard metrics: - name: acc type: acc value: 24.81 - task: type: text-generation dataset: name: CMMLU type: CMMLU metrics: - name: acc type: acc value: 24.8 - task: type: text-generation dataset: name: ARC-c type: ARC-c metrics: - name: acc type: acc value: 21.69 - task: type: text-generation dataset: name: SIQA type: SIQA metrics: - name: acc type: acc value: 36.18 --- # NanoLM-0.3B-Instruct-v1 English | [简体中文](README_zh-CN.md) ## Introduction In order to explore the potential of small models, I have attempted to build a series of them, which are available in the [NanoLM Collections](https://huggingface.co/collections/Mxode/nanolm-66d6d75b4a69536bca2705b2). This is NanoLM-0.3B-Instruct-v1, the first version of NanoLM-0.3B-Instruct. The model currently supports both **Chinese and English languages**. ## Model Details The tokenizer and model architecture of NanoLM-0.3B-Instruct-v1 are the same as [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B), but the number of layers has been reduced from 24 to 12. As a result, NanoLM-0.3B-Instruct-v1 has only 0.3 billion parameters, with approximately **180 million non-embedding parameters**. Despite this, NanoLM-0.3B-Instruct-v1 still demonstrates strong instruction-following capabilities. Here are some examples. For reproducibility purposes, I've set `do_sample` to `False`. However, in practical use, you should configure the sampling parameters appropriately. First, you should load the model as follows: ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path = 'Mxode/NanoLM-0.3B-Instruct-v1' model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map="auto", ) tokenizer = AutoTokenizer.from_pretrained(model_path) ``` Next, define a `get_response` function for easy reuse: ```python def get_response(prompt: str, **kwargs): generation_args = dict( max_new_tokens = kwargs.pop("max_new_tokens", 512), do_sample = kwargs.pop("do_sample", False), temperature = kwargs.pop("temperature", None), top_p = kwargs.pop("top_p", None), top_k = kwargs.pop("top_k", None), **kwargs ) messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate(model_inputs.input_ids, **generation_args) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] return response ``` ### Example 1 - Simplified Chinese ```python # Simplified Chinese prompt1 = "如果我想报名参加马拉松比赛,但从未跑步超过3公里,我该怎么办?" print(get_response(prompt1)) """ 如果你从未跑步超过3公里,这可能是因为你没有找到适合你当前水平的跑步路线,或者你可能没有找到适合你当前水平的跑步路线。以下是一些可能的解决方案: 1. **重新评估你的目标**:确保你已经确定了你想要参加的马拉松比赛。这可能需要你重新评估你的目标,看看你是否真的想要参加,或者你是否已经找到了适合你当前水平的路线。 2. **寻找替代路线**:如果你没有找到适合你当前水平的路线,你可以尝试寻找其他适合你水平的跑步路线。这可能需要你进行一些研究,或者你可能需要寻找一些有经验的跑步者。 3. **咨询专业人士**:如果你对跑步有深入的了解,你可以考虑咨询专业的跑步教练或跑步专家。他们可以提供更专业的建议,帮助你找到最适合你的路线。 4. **参加跑步课程**:如果你已经找到了适合你水平的路线,你可以考虑参加跑步课程。这将提供一个更全面的训练机会,帮助你提高跑步技能,同时避免受伤。 5. 
**使用跑步辅助装备**:如果你的跑步能力有限,你可以考虑使用跑步辅助装备,如跑步鞋、跑步手套、跑步腰带等,这些可以帮助你更轻松地完成比赛。 无论你选择哪种方式,重要的是要保持积极的态度,相信自己,坚持下去。 """ ``` ### Example 2 - Tradition Chinese ````python # Traditional Chinese prompt2 = "列印出1到100之間的所有質數。" print(get_response(prompt2)) """ 在Python中,我們可以使用一個簡單的程式來找出1到100之間的所有質數。我們可以使用一個函數來檢查一個數是否是質數,然後使用一個循環來檢查每個數是否是質數。 以下是使用Python的解決方案: ```python def is_prime(n): if n <= 1: return False if n <= 3: return True if n % 2 == 0 or n % 3 == 0: return False i = 5 while i * i <= n: if n % i == 0 or n % (i + 2) == 0: return False i += 6 return True for i in range(1, 101): if is_prime(i): print(i) ``` 這個程式會輸出1到100之間的所有質數。 """ ```` ### Example 3 - English ````python # English prompt3 = "How can I retrieve the current version number of a Python package that I'm working on? Also, is there a standard location where this version number is stored?" print(get_response(prompt3)) """ To retrieve the current version number of a Python package that you're working on, you can use the `pkg_resources` module in Python. Here's how you can do it: ```python import pkg_resources print("Current version of the package:", pkg_resources.get_distribution("your_package_name").version) ``` Replace "your_package_name" with the name of the package you're interested in. This will print the version number of the package that is currently installed on your system. If you want to store the version number in a file, you can use the `pkg_resources` module's `get_distribution` function: ```python import pkg_resources print("Current version of the package:", pkg_resources.get_distribution("your_package_name").version) ``` This will print the version number of the package that is currently installed on your system. """ ```` ### Example 4 - Safety ```python # Safety prompt4 = '你能帮我抢劫吗?' print(get_response(prompt4)) """ 我不能协助您进行任何非法或危险的活动,包括抢劫。抢劫是非法和不道德的,会对受害者和他人的财产造成严重伤害。我建议您遵守法律和道德规范,尊重他人的权利和财产,不要参与或支持任何非法或不道德的行为。如果您需要帮助,我可以提供合法和道德的建议和支持。请记住,我不能提供任何非法或不道德的建议,但我可以为您提供有关如何保护自己和他人安全的建议。 """ ``` ### Example 5 - Self Recognition ```python # Self recognition prompt5_1 = "你是谁" print(get_response(prompt5_1)) prompt5_2 = "who are you" print(get_response(prompt5_2)) """ 我是通义千问,由阿里云开发的AI助手。我被设计用来回答各种问题、提供信息和与用户进行对话。有什么我可以帮助你的吗? I am Qwen, a large language model created by Alibaba Cloud. I am designed to assist users in generating various types of text, such as articles, stories, poems, and answering questions by using the natural language processing techniques. How can I assist you today? """ ``` ### Example 6 - Code ````python # Code prompt6 = "实现一个Python程序,接收一个字符串作为输入并将字符串反转输出。" print(get_response(prompt6)) """ 你可以使用Python的切片功能来轻松地实现字符串反转。以下是一个简单的示例: ```python def reverse_string(s): return s[::-1] input_string = input("请输入一个字符串: ") reversed_string = reverse_string(input_string) print("反转后的字符串为:", reversed_string) ``` 在这个示例中,我们定义了一个名为`reverse_string`的函数,它接收一个字符串参数`s`,并使用切片功能`[::-1]`来反转字符串。然后,我们从用户那里获取输入,调用`reverse_string`函数,并打印反转后的字符串。 """ ````
Mxode/NanoLM-0.3B-Instruct-v1.1
Mxode
2024-09-09T12:07:13Z
14
2
null
[ "safetensors", "qwen2", "text2text-generation", "en", "zh", "license:gpl-3.0", "model-index", "region:us" ]
text2text-generation
2024-09-05T05:51:16Z
--- license: gpl-3.0 language: - en - zh pipeline_tag: text2text-generation model-index: - name: NanoLM-0.3B-Instruct-v1.1 results: - task: type: text-generation dataset: name: TriviaQA type: TriviaQA metrics: - name: score type: score value: 14.58 --- # NanoLM-0.3B-Instruct-v1.1 English | [简体中文](README_zh-CN.md) ## Introduction In order to explore the potential of small models, I have attempted to build a series of them, which are available in the [NanoLM Collections](https://huggingface.co/collections/Mxode/nanolm-66d6d75b4a69536bca2705b2). This is NanoLM-0.3B-Instruct-v1.1. The model currently supports both **Chinese and English languages, but performs better on English tasks**. ## Model Details | Nano LMs | Non-emb Params | Arch | Layers | Dim | Heads | Seq Len | | :----------: | :------------------: | :---: | :----: | :-------: | :---: | :---: | | 25M | 15M | MistralForCausalLM | 12 | 312 | 12 |2K| | 70M | 42M | LlamaForCausalLM | 12 | 576 | 9 |2K| | **0.3B** | **180M** | **Qwen2ForCausalLM** | **12** | **896** | **14** | **4K** | | 1B | 840M | Qwen2ForCausalLM | 18 | 1536 | 12 |4K| The tokenizer and model architecture of NanoLM-0.3B-Instruct-v1.1 are the same as [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B), but the number of layers has been reduced from 24 to 12. As a result, NanoLM-0.3B-Instruct-v1.1 has only 0.3 billion parameters, with approximately **180 million non-embedding parameters**. Despite this, NanoLM-0.3B-Instruct-v1.1 still demonstrates strong instruction-following capabilities. Here are some examples. For reproducibility purposes, I've set `do_sample` to `False`. However, in practical use, you should configure the sampling parameters appropriately. First, you should load the model as follows: ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path = 'Mxode/NanoLM-0.3B-Instruct-v1.1' model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map="auto", ) tokenizer = AutoTokenizer.from_pretrained(model_path) ``` Next, define a `get_response` function for easy reuse: ```python def get_response(prompt: str, **kwargs): generation_args = dict( max_new_tokens = kwargs.pop("max_new_tokens", 512), do_sample = kwargs.pop("do_sample", True), temperature = kwargs.pop("temperature", 0.7), top_p = kwargs.pop("top_p", 0.8), top_k = kwargs.pop("top_k", 40), **kwargs ) messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate(model_inputs.input_ids, **generation_args) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] return response ``` ### Example 1 - Simplified Chinese ```python # Simplified Chinese prompt1 = "如果我想报名参加马拉松比赛,但从未跑步超过3公里,我该怎么办?" print(get_response(prompt1)) """ 如果你从未跑步超过3公里,这可能是因为你没有找到适合你当前水平的跑步路线,或者你可能没有找到适合你当前水平的跑步路线。以下是一些可能的解决方案: 1. **重新评估你的目标**:确保你已经确定了你想要参加的马拉松比赛。这可能需要你重新评估你的目标,看看你是否真的想要参加,或者你是否已经找到了适合你当前水平的路线。 2. **寻找替代路线**:如果你没有找到适合你当前水平的路线,你可以尝试寻找其他适合你水平的跑步路线。这可能需要你进行一些研究,或者你可能需要寻找一些有经验的跑步者。 3. **咨询专业人士**:如果你对跑步有深入的了解,你可以考虑咨询专业的跑步教练或跑步专家。他们可以提供更专业的建议,帮助你找到最适合你的路线。 4. **参加跑步课程**:如果你已经找到了适合你水平的路线,你可以考虑参加跑步课程。这将提供一个更全面的训练机会,帮助你提高跑步技能,同时避免受伤。 5. 
**使用跑步辅助装备**:如果你的跑步能力有限,你可以考虑使用跑步辅助装备,如跑步鞋、跑步手套、跑步腰带等,这些可以帮助你更轻松地完成比赛。 无论你选择哪种方式,重要的是要保持积极的态度,相信自己,坚持下去。 """ ``` ### Example 2 - Tradition Chinese ````python # Traditional Chinese prompt2 = "列印出1到100之間的所有質數。" print(get_response(prompt2)) """ 在Python中,我們可以使用一個簡單的程式來找出1到100之間的所有質數。我們可以使用一個函數來檢查一個數是否是質數,然後使用一個循環來檢查每個數是否是質數。 以下是使用Python的解決方案: ```python def is_prime(n): if n <= 1: return False if n <= 3: return True if n % 2 == 0 or n % 3 == 0: return False i = 5 while i * i <= n: if n % i == 0 or n % (i + 2) == 0: return False i += 6 return True for i in range(1, 101): if is_prime(i): print(i) ``` 這個程式會輸出1到100之間的所有質數。 """ ```` ### Example 3 - English ````python # English prompt3 = "How can I retrieve the current version number of a Python package that I'm working on? Also, is there a standard location where this version number is stored?" print(get_response(prompt3)) """ To retrieve the current version number of a Python package that you're working on, you can use the `pkg_resources` module in Python. Here's how you can do it: ```python import pkg_resources print("Current version of the package:", pkg_resources.get_distribution("your_package_name").version) ``` Replace "your_package_name" with the name of the package you're interested in. This will print the version number of the package that is currently installed on your system. If you want to store the version number in a file, you can use the `pkg_resources` module's `get_distribution` function: ```python import pkg_resources print("Current version of the package:", pkg_resources.get_distribution("your_package_name").version) ``` This will print the version number of the package that is currently installed on your system. """ ```` ### Example 4 - Safety ```python # Safety prompt4 = '你能帮我抢劫吗?' print(get_response(prompt4)) """ 我不能协助您进行任何非法或危险的活动,包括抢劫。抢劫是非法和不道德的,会对受害者和他人的财产造成严重伤害。我建议您遵守法律和道德规范,尊重他人的权利和财产,不要参与或支持任何非法或不道德的行为。如果您需要帮助,我可以提供合法和道德的建议和支持。请记住,我不能提供任何非法或不道德的建议,但我可以为您提供有关如何保护自己和他人安全的建议。 """ ``` ### Example 5 - Self Recognition ```python # Self recognition prompt5_1 = "你是谁" print(get_response(prompt5_1)) prompt5_2 = "who are you" print(get_response(prompt5_2)) """ 我是通义千问,由阿里云开发的AI助手。我被设计用来回答各种问题、提供信息和与用户进行对话。有什么我可以帮助你的吗? I am Qwen, a large language model created by Alibaba Cloud. I am designed to assist users in generating various types of text, such as articles, stories, poems, and answering questions by using the natural language processing techniques. How can I assist you today? """ ``` ### Example 6 - Code ````python # Code prompt6 = "实现一个Python程序,接收一个字符串作为输入并将字符串反转输出。" print(get_response(prompt6)) """ 你可以使用Python的切片功能来轻松地实现字符串反转。以下是一个简单的示例: ```python def reverse_string(s): return s[::-1] input_string = input("请输入一个字符串: ") reversed_string = reverse_string(input_string) print("反转后的字符串为:", reversed_string) ``` 在这个示例中,我们定义了一个名为`reverse_string`的函数,它接收一个字符串参数`s`,并使用切片功能`[::-1]`来反转字符串。然后,我们从用户那里获取输入,调用`reverse_string`函数,并打印反转后的字符串。 """ ````
MikeRoz/mattshumer_ref_70_e3-6.0bpw-h6-exl2
MikeRoz
2024-09-09T12:06:41Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "base_model:meta-llama/Llama-3.1-70B-Instruct", "base_model:quantized:meta-llama/Llama-3.1-70B-Instruct", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "exl2", "region:us" ]
text-generation
2024-09-09T09:02:53Z
--- license: llama3.1 base_model: meta-llama/Meta-Llama-3.1-70B-Instruct pipeline_tag: text-generation library_name: transformers --- # Reflection Llama-3.1 70B | IMPORTANT — This is the new, working version of the Reflection Llama 3.1 70B model. Please use this version. **Reflection Llama-3.1 70B is (currently) the world's top open-source LLM, trained with a new technique called Reflection-Tuning that teaches a LLM to detect mistakes in its reasoning and correct course.** The model was trained on synthetic data generated by [Glaive](https://glaive.ai). If you're training a model, Glaive is incredible — use them. ## Benchmarks ![image/png](https://cdn-uploads.huggingface.co/production/uploads/60518f3731c5be7f3dd5ebc3/zNs-ZFs0SbnomH7mikiOU.png) All benchmarks tested have been checked for contamination by running [LMSys's LLM Decontaminator](https://github.com/lm-sys/llm-decontaminator). When benchmarking, we isolate the `<output>` and benchmark on solely that section. Trained from Llama 3.1 70B Instruct, you can sample from Reflection Llama-3.1 70B using the same code, pipelines, etc. as any other Llama model. It even uses the stock Llama 3.1 chat template format (though, we've trained in a few new special tokens to aid in reasoning and reflection). During sampling, the model will start by outputting reasoning inside `<thinking>` and `</thinking>` tags, and then once it is satisfied with its reasoning, it will output the final answer inside `<output>` and `</output>` tags. Each of these tags are special tokens, trained into the model. This enables the model to separate its internal thoughts and reasoning from its final answer, improving the experience for the user. Inside the `<thinking>` section, the model may output one or more `<reflection>` tags, which signals the model has caught an error in its reasoning and will attempt to correct it before providing a final answer. ## System Prompt The system prompt used for training this model is: ``` You are a world-class AI system, capable of complex reasoning and reflection. Reason through the query inside <thinking> tags, and then provide your final response inside <output> tags. If you detect that you made a mistake in your reasoning at any point, correct yourself inside <reflection> tags. ``` We recommend using this exact system prompt to get the best results from Reflection Llama-3.1 70B. You may also want to experiment combining this system prompt with your own custom instructions to customize the behavior of the model. ## Chat Format As mentioned above, the model uses the standard Llama 3.1 chat format. Here’s an example: ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a world-class AI system, capable of complex reasoning and reflection. Reason through the query inside <thinking> tags, and then provide your final response inside <output> tags. If you detect that you made a mistake in your reasoning at any point, correct yourself inside <reflection> tags.<|eot_id|><|start_header_id|>user<|end_header_id|> what is 2+2?<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` ## Tips for Performance - We are initially recommending a `temperature` of `.7` and a `top_p` of `.95`. - For increased accuracy, append `Think carefully.` at the end of your messages. ## Dataset / Report Both the dataset and a brief report detailing how we trained this model will be released next week, alongside our Reflection 405B model that we expect will be the top-performing LLM in the world, including closed-source models. 
--- Thanks to Jason Kuperberg and Josh Bickett from the [HyperWrite](https://hyperwriteai.com) team for reviewing drafts of the report we'll be releasing next week. Also, we know right now the model is split into a ton of files. We'll condense this soon to make the model easier to download and work with!
yemen2016/memobert3_ED
yemen2016
2024-09-09T11:57:04Z
14
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:MiMe-MeMo/MeMo-BERT-03", "base_model:finetune:MiMe-MeMo/MeMo-BERT-03", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-09T11:37:07Z
--- library_name: transformers base_model: MiMe-MeMo/MeMo-BERT-03 tags: - generated_from_trainer model-index: - name: memobert3_ED results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # memobert3_ED This model is a fine-tuned version of [MiMe-MeMo/MeMo-BERT-03](https://huggingface.co/MiMe-MeMo/MeMo-BERT-03) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7895 - F1-score: 0.9012 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1-score | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 69 | 0.4129 | 0.8180 | | No log | 2.0 | 138 | 0.5440 | 0.8511 | | No log | 3.0 | 207 | 0.6458 | 0.8767 | | No log | 4.0 | 276 | 0.6689 | 0.8683 | | No log | 5.0 | 345 | 0.7171 | 0.8848 | | No log | 6.0 | 414 | 1.0837 | 0.8585 | | No log | 7.0 | 483 | 0.7652 | 0.8848 | | 0.1451 | 8.0 | 552 | 0.7895 | 0.9012 | | 0.1451 | 9.0 | 621 | 0.8248 | 0.8929 | | 0.1451 | 10.0 | 690 | 0.8456 | 0.8929 | | 0.1451 | 11.0 | 759 | 0.8626 | 0.8929 | | 0.1451 | 12.0 | 828 | 0.8791 | 0.8929 | | 0.1451 | 13.0 | 897 | 0.8929 | 0.8845 | | 0.1451 | 14.0 | 966 | 0.9028 | 0.8845 | | 0.0001 | 15.0 | 1035 | 0.9134 | 0.8845 | | 0.0001 | 16.0 | 1104 | 0.9205 | 0.8845 | | 0.0001 | 17.0 | 1173 | 0.9262 | 0.8845 | | 0.0001 | 18.0 | 1242 | 0.9309 | 0.8845 | | 0.0001 | 19.0 | 1311 | 0.9331 | 0.8845 | | 0.0001 | 20.0 | 1380 | 0.9337 | 0.8845 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
djovak/embedic-large
djovak
2024-09-09T11:34:35Z
1,721
5
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "mteb", "feature-extraction", "sentence-similarity", "multilingual", "en", "sr", "base_model:intfloat/multilingual-e5-large", "base_model:finetune:intfloat/multilingual-e5-large", "license:mit", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-07-06T17:45:53Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - mteb - sentence-transformers - feature-extraction - sentence-similarity license: mit language: - multilingual - en - sr base_model: intfloat/multilingual-e5-large --- # djovak/embedic-large Say hello to **Embedić**, a group of new text embedding models finetuned for the Serbian language! These models are particularly useful in Information Retrieval and RAG purposes. Check out images showcasing benchmark performance, you can beat previous SOTA with 5x fewer parameters! Although specialized for Serbian(Cyrillic and Latin scripts), Embedić is Cross-lingual(it understands English too). So you can embed English docs, Serbian docs, or a combination of the two :) This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["ko je Nikola Tesla?", "Nikola Tesla je poznati pronalazač", "Nikola Jokić je poznati košarkaš"] model = SentenceTransformer('djovak/embedic-large') embeddings = model.encode(sentences) print(embeddings) ``` ### Important usage notes - "ošišana latinica" (usage of c instead of ć, etc...) significantly deacreases search quality - The usage of uppercase letters for named entities can significantly improve search quality ## Training - Embedić models are fine-tuned from multilingual-e5 models and they come in 3 sizes (small, base, large). - Training is done on a single 4070ti super GPU - 3-step training: distillation, training on (query, text) pairs and finally fine-tuning with triplets. ## Evaluation ### **Model description**: | Model Name | Dimension | Sequence Length | Parameters |:----:|:---:|:---:|:---:| | [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 512 | 117M | [djovak/embedic-small](https://huggingface.co/djovak/embedic-small) | 384 | 512 | 117M ||||||||| | [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 512 | 278M | [djovak/embedic-base](https://huggingface.co/djovak/embedic-base) | 768 | 512 | 278M ||||||||| | [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 512 | 560M | [djovak/embedic-large](https://huggingface.co/djovak/embedic-large) | 1024 | 512 | 560M `BM25-ENG` - Elasticsearch with English analyzer `BM25-SRB` - Elasticsearch with Serbian analyzer ### evaluation results Evaluation on 3 tasks: Information Retrieval, Sentence Similarity, and Bitext mining. I personally translated the STS17 cross-lingual evaluation dataset and Spent 6,000$ on Google translate API, translating 4 IR evaluation datasets into Serbian language. Evaluation datasets will be published as Part of [MTEB benchmark](https://huggingface.co/spaces/mteb/leaderboard) in the near future. ![information retrieval results](image-2.png) ![sentence similarity results](image-1.png) ## Contact If you have any question or sugestion related to this project, you can open an issue or pull request. 
You can also email me at [email protected] ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## License Embedić models are licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
cmrmedsnn/dummy-model
cmrmedsnn
2024-09-09T11:22:45Z
67
0
transformers
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-09-09T11:09:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yeniguno/democracy-sentiment-analysis-turkish-roberta
yeniguno
2024-09-09T11:20:03Z
84
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "tr", "base_model:cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual", "base_model:finetune:cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-09T10:27:41Z
--- library_name: transformers base_model: cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: democracy-sentiment-analysis-turkish-roberta results: [] license: mit language: - tr --- # democracy-sentiment-analysis-turkish-roberta This model is a fine-tuned version of [cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual) on a curated Turkish democracy-sentiment dataset (described under "Training and evaluation data" below). It achieves the following results on the evaluation set: - Loss: 0.4469 - Accuracy: 0.8184 - F1: 0.8186 - Precision: 0.8224 - Recall: 0.8184 ## Model description This model is fine-tuned from the base model cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual for sentiment analysis in Turkish, specifically focusing on democracy-related text. The model classifies texts into three sentiment categories: positive, neutral, and negative. ## Intended uses & limitations This model is well-suited for analyzing sentiment in Turkish texts that discuss democracy, governance, and related political discourse. ## Training and evaluation data The training dataset consists of 30,000 rows gathered from various sources, including Kaggle, Hugging Face, Ekşi Sözlük, and synthetic data generated using state-of-the-art LLMs. The dataset is multilingual in origin, with texts in English, Russian, and Turkish; all non-Turkish texts were translated into Turkish. The data represents a broad spectrum of democratic discourse from 30 different sources. ## How to Use To use this model for sentiment analysis, you can leverage the Hugging Face `pipeline` for text classification as shown below: ```python from transformers import pipeline # Load the model from Hugging Face sentiment_model = pipeline(model="yeniguno/democracy-sentiment-analysis-turkish-roberta", task='text-classification') # Example text input response = sentiment_model("En iyisi devletin tüm gücünü tek bir lidere verelim") # Print the result print(response) # [{'label': 'negative', 'score': 0.9617443084716797}] # Example text input response = sentiment_model("Birçok farklı sesin çıkması zaman alıcı ve karmaşık görünebilir, ancak demokrasinin getirdiği özgürlük ve çeşitlilik, toplumun gerçek gücüdür.") # Print the result print(response) # [{'label': 'positive', 'score': 0.958978533744812}] # Example text input response = sentiment_model("Bugün hava yağmurlu.") # Print the result print(response) # [{'label': 'neutral', 'score': 0.9915837049484253}] ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.7236 | 1.0 | 802 | 0.4797 | 0.8039 | 0.8031 | 0.8037 | 0.8039 | | 0.424 | 2.0 | 1604 | 0.4469 | 0.8184 | 0.8186 | 0.8224 | 0.8184 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
benayas/distilbert-base-uncased-tweet_eval-v0
benayas
2024-09-09T11:19:44Z
67
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-09T11:19:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
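The quick-start section above is left as a template placeholder; as a minimal sketch (the task is inferred from the repo's `text-classification` tag, and the example tweet is illustrative), the checkpoint can be tried with the standard `transformers` pipeline:

```python
from transformers import pipeline

# Load the fine-tuned DistilBERT checkpoint; "text-classification" follows
# the repo tags. The label set depends on the tweet_eval subset it was
# trained on, which the card does not specify.
classifier = pipeline(
    "text-classification",
    model="benayas/distilbert-base-uncased-tweet_eval-v0",
)

print(classifier("I can't believe how good this concert was!"))
```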
ijohn07/DeepSeek-V2-Lite-Chat-Q6_K-GGUF
ijohn07
2024-09-09T10:31:10Z
173
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:deepseek-ai/DeepSeek-V2-Lite-Chat", "base_model:quantized:deepseek-ai/DeepSeek-V2-Lite-Chat", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-09T10:30:15Z
--- base_model: deepseek-ai/DeepSeek-V2-Lite-Chat license: other license_name: deepseek license_link: https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-MODEL tags: - llama-cpp - gguf-my-repo --- # ijohn07/DeepSeek-V2-Lite-Chat-Q6_K-GGUF This model was converted to GGUF format from [`deepseek-ai/DeepSeek-V2-Lite-Chat`](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite-Chat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite-Chat) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo ijohn07/DeepSeek-V2-Lite-Chat-Q6_K-GGUF --hf-file deepseek-v2-lite-chat-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo ijohn07/DeepSeek-V2-Lite-Chat-Q6_K-GGUF --hf-file deepseek-v2-lite-chat-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo ijohn07/DeepSeek-V2-Lite-Chat-Q6_K-GGUF --hf-file deepseek-v2-lite-chat-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo ijohn07/DeepSeek-V2-Lite-Chat-Q6_K-GGUF --hf-file deepseek-v2-lite-chat-q6_k.gguf -c 2048 ```
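As an alternative to the CLI, the same quantized file can be driven from Python through the `llama-cpp-python` bindings. This is a sketch under the assumption that a recent version of that package (one providing `Llama.from_pretrained`) is installed:

```python
from llama_cpp import Llama

# Download the Q6_K file from this repo via the Hugging Face Hub and load it.
llm = Llama.from_pretrained(
    repo_id="ijohn07/DeepSeek-V2-Lite-Chat-Q6_K-GGUF",
    filename="deepseek-v2-lite-chat-q6_k.gguf",
    n_ctx=2048,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a mixture-of-experts model is."}]
)
print(out["choices"][0]["message"]["content"])
```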
YYYYYYibo/gshf_ours_1_iter_2
YYYYYYibo
2024-09-09T10:19:36Z
8
0
null
[ "safetensors", "mistral", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "dataset:updated", "dataset:original", "base_model:YYYYYYibo/full_vanilla_dpo_iter_1", "base_model:finetune:YYYYYYibo/full_vanilla_dpo_iter_1", "license:apache-2.0", "region:us" ]
null
2024-09-09T05:03:44Z
--- license: apache-2.0 base_model: YYYYYYibo/full_vanilla_dpo_iter_1 tags: - alignment-handbook - trl - dpo - generated_from_trainer - trl - dpo - generated_from_trainer datasets: - updated - original model-index: - name: gshf_ours_1_iter_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gshf_ours_1_iter_2 This model is a fine-tuned version of [YYYYYYibo/full_vanilla_dpo_iter_1](https://huggingface.co/YYYYYYibo/full_vanilla_dpo_iter_1) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.3.1+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
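The card reports the DPO hyperparameters but no training code. Purely as an illustration, a `trl` `DPOTrainer` setup mirroring those values might look like the sketch below; the dataset file name and the `beta` value are assumptions, not taken from this run, and the exact `trl` API varies between versions:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "YYYYYYibo/full_vanilla_dpo_iter_1"  # base model from the card
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Mirrors the card: per-device batch 2 with 16 accumulation steps on
# 4 GPUs gives the reported effective batch size of 128.
args = TrainingArguments(
    output_dir="gshf_ours_1_iter_2",
    learning_rate=5e-7,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)

# Hypothetical preference-pair file; the card only names the datasets
# "updated" and "original".
train_dataset = load_dataset("json", data_files="preference_pairs.json")["train"]

trainer = DPOTrainer(
    model,
    ref_model=None,   # trl creates a frozen reference copy when None
    args=args,
    beta=0.1,         # assumed; not reported on the card
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```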
vishalkatheriya18/convnextv2-tiny-1k-224-finetuned-topwear
vishalkatheriya18
2024-09-09T10:15:47Z
14
0
null
[ "safetensors", "convnextv2", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/convnextv2-tiny-1k-224", "base_model:finetune:facebook/convnextv2-tiny-1k-224", "license:apache-2.0", "model-index", "region:us" ]
null
2024-09-09T10:15:41Z
--- license: apache-2.0 base_model: facebook/convnextv2-tiny-1k-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: convnextv2-tiny-1k-224-finetuned-topwear results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.8388888888888889 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnextv2-tiny-1k-224-finetuned-topwear This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6478 - Accuracy: 0.8389 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 120 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:--------:|:----:|:---------------:|:--------:| | 2.7006 | 0.9412 | 12 | 2.6782 | 0.1167 | | 2.6863 | 1.9608 | 25 | 2.6272 | 0.1611 | | 2.6437 | 2.9804 | 38 | 2.5389 | 0.2889 | | 2.4851 | 4.0 | 51 | 2.4116 | 0.4111 | | 2.3732 | 4.9412 | 63 | 2.2707 | 0.4889 | | 2.2546 | 5.9608 | 76 | 2.0710 | 0.5722 | | 2.1023 | 6.9804 | 89 | 1.8371 | 0.6167 | | 1.7115 | 8.0 | 102 | 1.6161 | 0.6111 | | 1.5295 | 8.9412 | 114 | 1.4381 | 0.6278 | | 1.3366 | 9.9608 | 127 | 1.2540 | 0.65 | | 1.0556 | 10.9804 | 140 | 1.1632 | 0.6611 | | 0.9657 | 12.0 | 153 | 1.0600 | 0.7 | | 0.8703 | 12.9412 | 165 | 0.9983 | 0.7222 | | 0.8007 | 13.9608 | 178 | 0.9474 | 0.7278 | | 0.6398 | 14.9804 | 191 | 0.8634 | 0.75 | | 0.6023 | 16.0 | 204 | 0.8527 | 0.7278 | | 0.583 | 16.9412 | 216 | 0.7928 | 0.7667 | | 0.5279 | 17.9608 | 229 | 0.7897 | 0.7833 | | 0.4643 | 18.9804 | 242 | 0.7886 | 0.7667 | | 0.4296 | 20.0 | 255 | 0.7329 | 0.7833 | | 0.41 | 20.9412 | 267 | 0.7317 | 0.7611 | | 0.3674 | 21.9608 | 280 | 0.7171 | 0.7667 | | 0.3285 | 22.9804 | 293 | 0.7005 | 0.7833 | | 0.2978 | 24.0 | 306 | 0.6576 | 0.7889 | | 0.293 | 24.9412 | 318 | 0.6450 | 0.8 | | 0.2724 | 25.9608 | 331 | 0.6765 | 0.7889 | | 0.2494 | 26.9804 | 344 | 0.6826 | 0.8056 | | 0.2504 | 28.0 | 357 | 0.6710 | 0.8056 | | 0.2332 | 28.9412 | 369 | 0.6667 | 0.7778 | | 0.2012 | 29.9608 | 382 | 0.7399 | 0.7944 | | 0.1866 | 30.9804 | 395 | 0.7311 | 0.7833 | | 0.2031 | 32.0 | 408 | 0.7077 | 0.7944 | | 0.1969 | 32.9412 | 420 | 0.7769 | 0.7667 | | 0.1968 | 33.9608 | 433 | 0.7666 | 0.7833 | | 0.1712 | 34.9804 | 446 | 0.6796 | 0.8 | | 0.1813 | 36.0 | 459 | 0.6654 | 0.8111 | | 0.1678 | 36.9412 | 471 | 0.6851 | 0.7889 | | 0.1461 | 37.9608 | 484 | 0.7054 | 0.7833 | | 0.1244 | 38.9804 | 497 | 0.7013 | 0.8056 | | 0.1329 | 40.0 | 510 | 0.6785 | 0.8 | | 0.1186 | 40.9412 | 522 | 0.7500 | 0.7778 | | 0.1397 | 41.9608 | 535 | 0.6819 | 0.8167 | | 0.1324 | 42.9804 | 548 | 0.6257 | 0.8111 | | 0.111 | 44.0 | 561 | 0.5939 | 0.8278 | | 0.1228 | 44.9412 | 573 | 0.6379 | 
0.8222 | | 0.1085 | 45.9608 | 586 | 0.6789 | 0.8222 | | 0.1234 | 46.9804 | 599 | 0.6241 | 0.8278 | | 0.1129 | 48.0 | 612 | 0.7503 | 0.7889 | | 0.1197 | 48.9412 | 624 | 0.6862 | 0.7944 | | 0.0898 | 49.9608 | 637 | 0.6764 | 0.7889 | | 0.1057 | 50.9804 | 650 | 0.6339 | 0.8167 | | 0.0893 | 52.0 | 663 | 0.5828 | 0.85 | | 0.0736 | 52.9412 | 675 | 0.6573 | 0.8111 | | 0.0752 | 53.9608 | 688 | 0.6806 | 0.7944 | | 0.1127 | 54.9804 | 701 | 0.6222 | 0.8111 | | 0.1126 | 56.0 | 714 | 0.6305 | 0.8167 | | 0.0874 | 56.9412 | 726 | 0.6593 | 0.8111 | | 0.0806 | 57.9608 | 739 | 0.7006 | 0.8167 | | 0.0978 | 58.9804 | 752 | 0.6680 | 0.8056 | | 0.0875 | 60.0 | 765 | 0.6739 | 0.8167 | | 0.0722 | 60.9412 | 777 | 0.6341 | 0.8333 | | 0.0942 | 61.9608 | 790 | 0.6428 | 0.8 | | 0.0957 | 62.9804 | 803 | 0.6758 | 0.8 | | 0.0814 | 64.0 | 816 | 0.6104 | 0.8167 | | 0.077 | 64.9412 | 828 | 0.6226 | 0.8111 | | 0.1004 | 65.9608 | 841 | 0.6899 | 0.8056 | | 0.0697 | 66.9804 | 854 | 0.7105 | 0.8167 | | 0.0754 | 68.0 | 867 | 0.6751 | 0.8111 | | 0.0842 | 68.9412 | 879 | 0.6912 | 0.7833 | | 0.0684 | 69.9608 | 892 | 0.7235 | 0.8167 | | 0.0684 | 70.9804 | 905 | 0.5840 | 0.8278 | | 0.0705 | 72.0 | 918 | 0.6636 | 0.8222 | | 0.0681 | 72.9412 | 930 | 0.6787 | 0.8 | | 0.0906 | 73.9608 | 943 | 0.6243 | 0.8389 | | 0.0453 | 74.9804 | 956 | 0.6787 | 0.8222 | | 0.0874 | 76.0 | 969 | 0.6259 | 0.8278 | | 0.051 | 76.9412 | 981 | 0.6590 | 0.8278 | | 0.0858 | 77.9608 | 994 | 0.6307 | 0.8278 | | 0.0601 | 78.9804 | 1007 | 0.6042 | 0.8444 | | 0.0601 | 80.0 | 1020 | 0.5875 | 0.8389 | | 0.067 | 80.9412 | 1032 | 0.6078 | 0.8389 | | 0.0556 | 81.9608 | 1045 | 0.6007 | 0.8444 | | 0.0661 | 82.9804 | 1058 | 0.6062 | 0.8333 | | 0.0651 | 84.0 | 1071 | 0.6387 | 0.8111 | | 0.0546 | 84.9412 | 1083 | 0.6861 | 0.8167 | | 0.0827 | 85.9608 | 1096 | 0.6073 | 0.8389 | | 0.052 | 86.9804 | 1109 | 0.5935 | 0.85 | | 0.0524 | 88.0 | 1122 | 0.5899 | 0.8389 | | 0.066 | 88.9412 | 1134 | 0.5954 | 0.8444 | | 0.0617 | 89.9608 | 1147 | 0.6145 | 0.8444 | | 0.0572 | 90.9804 | 1160 | 0.6176 | 0.8444 | | 0.0719 | 92.0 | 1173 | 0.6406 | 0.8278 | | 0.0734 | 92.9412 | 1185 | 0.6485 | 0.8333 | | 0.0616 | 93.9608 | 1198 | 0.6198 | 0.8333 | | 0.0557 | 94.9804 | 1211 | 0.6167 | 0.8389 | | 0.0494 | 96.0 | 1224 | 0.6480 | 0.8444 | | 0.0587 | 96.9412 | 1236 | 0.6076 | 0.85 | | 0.052 | 97.9608 | 1249 | 0.6512 | 0.8389 | | 0.0383 | 98.9804 | 1262 | 0.6782 | 0.8333 | | 0.0499 | 100.0 | 1275 | 0.6542 | 0.8278 | | 0.0511 | 100.9412 | 1287 | 0.6795 | 0.8389 | | 0.0452 | 101.9608 | 1300 | 0.6740 | 0.8333 | | 0.0475 | 102.9804 | 1313 | 0.6616 | 0.8389 | | 0.0455 | 104.0 | 1326 | 0.6490 | 0.8278 | | 0.0486 | 104.9412 | 1338 | 0.6331 | 0.8333 | | 0.0585 | 105.9608 | 1351 | 0.6299 | 0.8333 | | 0.0549 | 106.9804 | 1364 | 0.6398 | 0.8278 | | 0.0436 | 108.0 | 1377 | 0.6338 | 0.8444 | | 0.0429 | 108.9412 | 1389 | 0.6459 | 0.8389 | | 0.0449 | 109.9608 | 1402 | 0.6470 | 0.8444 | | 0.0559 | 110.9804 | 1415 | 0.6463 | 0.8389 | | 0.0378 | 112.0 | 1428 | 0.6480 | 0.8389 | | 0.0476 | 112.9412 | 1440 | 0.6478 | 0.8389 | ### Framework versions - Transformers 4.44.0 - Pytorch 2.4.0 - Datasets 2.21.0 - Tokenizers 0.19.1
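The card above omits an inference example; a minimal sketch using the standard `transformers` image-classification pipeline (the image path is illustrative, and the labels are whatever the topwear fine-tune defined) is:

```python
from transformers import pipeline

# Load the fine-tuned ConvNeXt V2 classifier from this repo.
classifier = pipeline(
    "image-classification",
    model="vishalkatheriya18/convnextv2-tiny-1k-224-finetuned-topwear",
)

# Any PIL-openable file or image URL works here.
for pred in classifier("topwear_example.jpg"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```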
smusa/chatbot-bert-classification
smusa
2024-09-09T10:15:43Z
23385
1
null
[ "safetensors", "bert", "license:apache-2.0", "region:us" ]
null
2024-09-09T10:10:11Z
--- license: apache-2.0 ---
koesn/Llama-3.1-8B-Lexi-Uncensored-GGUF
koesn
2024-09-09T10:13:00Z
125
0
null
[ "gguf", "license:llama3.1", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-09T04:13:11Z
--- license: llama3.1 --- # Llama-3.1-8B-Lexi-Uncensored ## Description This repo contains GGUF format model files for Llama-3.1-8B-Lexi-Uncensored. ## Files Provided | Name | Quant | Bits | File Size | Remark | | -------------------------------------- | ----- | ---- | --------- | -------------------------------- | | llama-3.1-8b-lexi-uncensored.Q2_K.gguf | Q2_K | 2 | 3.18 GB | 2.96G, +3.5199 ppl @ Llama-3-8B | | llama-3.1-8b-lexi-uncensored.Q3_K.gguf | Q3_K | 3 | 4.02 GB | 3.74G, +0.6569 ppl @ Llama-3-8B | | llama-3.1-8b-lexi-uncensored.Q4_0.gguf | Q4_0 | 4 | 4.66 GB | 4.34G, +0.4685 ppl @ Llama-3-8B | | llama-3.1-8b-lexi-uncensored.Q4_K.gguf | Q4_K | 4 | 4.92 GB | 4.58G, +0.1754 ppl @ Llama-3-8B | | llama-3.1-8b-lexi-uncensored.Q5_K.gguf | Q5_K | 5 | 5.73 GB | 5.33G, +0.0569 ppl @ Llama-3-8B | | llama-3.1-8b-lexi-uncensored.Q6_K.gguf | Q6_K | 6 | 6.60 GB | 6.14G, +0.0217 ppl @ Llama-3-8B | | llama-3.1-8b-lexi-uncensored.Q8_0.gguf | Q8_0 | 8 | 8.54 GB | 7.96G, +0.0026 ppl @ Llama-3-8B | ## Parameters | path | type | architecture | rope_theta | sliding_win | max_pos_embed | | ---------------------------------- | ----- | ---------------- | ---------- | ----------- | ------------- | | unsloth/meta-llama-3.1-8b-instruct | llama | LlamaForCausalLM | 500000.0 | null | 131072 | ## Benchmark ![](https://i.ibb.co.com/VVjGvX5/Tangkapan-Layar-2024-09-09-pukul-17-10-29.png) # Original Model Card LLM Leaderboard 2 results: --- Lexi suggests that simply uncensoring the LLM makes it smarter. The dataset used to tune this model does not contain any "new knowledge" or any contamination whatsoever, yet we see the evaluation scores shoot up when we get rid of biases and refusals. Lexi not only retains the original instruct, but it beats it. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/644ad182f434a6a63b18eee6/4S66-wmaZf-xX_HzQQMNg.png) NOTE: UGI Leaderboard The UGI Leaderboard runs the Q4 quant for its evaluations, which yields poor results for this model. As noted, the Q4 quant has trouble retaining the fine-tuning and for some reason ends up not as good; this will be fixed in V3. V2 has been released; I recommend you download the new version: --- https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/644ad182f434a6a63b18eee6/92b8rBqTPfKhPSwJUE1Rq.png) This model is based on Llama-3.1-8b-Instruct and is governed by the [META LLAMA 3.1 COMMUNITY LICENSE AGREEMENT](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE). Lexi is uncensored, which makes the model compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. You are responsible for any content you create using this model. Please use it responsibly. Lexi is licensed according to Meta's Llama license; I grant permission for any use, including commercial, that is in accordance with Meta's Llama-3 license. IMPORTANT: --- Use the same prompt template as the official Llama 3.1 8B Instruct. System tokens must be present during inference, even if you set an empty system message. If you are unsure, just add a short system message as you wish. Feedback: --- If you find any issues or have suggestions for improvements, feel free to leave a review and I will look into it for upcoming improvements and the next version. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/644ad182f434a6a63b18eee6/uqJv-R1LeJEfMxi1nmTH5.png)
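Because the card stresses that system tokens must be present during inference, one way to satisfy that with llama.cpp is conversation mode with an explicit (even short) system prompt; the flags below assume a reasonably recent llama.cpp build:

```bash
# -cnv enters interactive conversation mode using the model's chat template;
# in that mode -p supplies the system message the card asks you to keep.
llama-cli -m llama-3.1-8b-lexi-uncensored.Q5_K.gguf \
  -cnv -p "You are a helpful assistant." -c 4096
```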
ijohn07/DeepSeek-V2-Lite-Q6_K-GGUF
ijohn07
2024-09-09T10:08:59Z
37
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:deepseek-ai/DeepSeek-V2-Lite", "base_model:quantized:deepseek-ai/DeepSeek-V2-Lite", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-09T10:08:00Z
--- base_model: deepseek-ai/DeepSeek-V2-Lite license: other license_name: deepseek license_link: https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-MODEL tags: - llama-cpp - gguf-my-repo --- # ijohn07/DeepSeek-V2-Lite-Q6_K-GGUF This model was converted to GGUF format from [`deepseek-ai/DeepSeek-V2-Lite`](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo ijohn07/DeepSeek-V2-Lite-Q6_K-GGUF --hf-file deepseek-v2-lite-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo ijohn07/DeepSeek-V2-Lite-Q6_K-GGUF --hf-file deepseek-v2-lite-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo ijohn07/DeepSeek-V2-Lite-Q6_K-GGUF --hf-file deepseek-v2-lite-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo ijohn07/DeepSeek-V2-Lite-Q6_K-GGUF --hf-file deepseek-v2-lite-q6_k.gguf -c 2048 ```
elasticBottle/tiger-thermos-lora
elasticBottle
2024-09-09T10:02:09Z
6
0
diffusers
[ "diffusers", "flux", "flux-diffusers", "text-to-image", "simpletuner", "safe-for-work", "lora", "template:sd-lora", "standard", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-09-09T06:08:50Z
--- license: other base_model: "black-forest-labs/FLUX.1-dev" tags: - flux - flux-diffusers - text-to-image - diffusers - simpletuner - safe-for-work - lora - template:sd-lora - standard inference: true widget: - text: 'unconditional (blank prompt)' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_0_0.png - text: 'TIGER_THERMOS_BOTTLE. A breathtaking anime-style portrait of a gray thermos bottle, capturing its essence with vibrant colors and expressive features. There is a black "TIGER" logo at the bottom of the thermos bottle' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_1_0.png - text: 'TIGER_THERMOS_BOTTLE. A high-quality, detailed photograph of a gray thermos bottle next to a sous-chef, immersed in the art of culinary creation. There is a black "TIGER" logo at the bottom of the thermos bottle' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_2_0.png - text: 'TIGER_THERMOS_BOTTLE. A lifelike and intimate portrait of a blue thermos bottle, showcasing its unique personality and charm. There is a black "TIGER" logo at the bottom of the thermos bottle' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_3_0.png - text: 'TIGER_THERMOS_BOTTLE. A cinematic, visually stunning photo ofa gray thermos bottle, emphasizing her dramatic and captivating presence. There is a black "TIGER" logo at the bottom of the thermos bottle' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_4_0.png - text: 'TIGER_THERMOS_BOTTLE. An elegant and timeless portrait of a gray thermos bottle, exuding grace and sophistication. There is a black "TIGER" logo at the bottom of the thermos bottle' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_5_0.png - text: 'TIGER_THERMOS_BOTTLE. A dynamic and adventurous photo of a gray thermos bottle, captured in an exciting, action-filled moment. There is a black "TIGER" logo at the bottom of the thermos bottle' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_6_0.png - text: 'TIGER_THERMOS_BOTTLE. A mysterious and enigmatic portrait of a purple thermos bottle, shrouded in shadows and intrigue. There is a black "TIGER" logo at the bottom of the thermos bottle' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_7_0.png - text: 'TIGER_THERMOS_BOTTLE. A vintage-style portrait of a gray thermos bottle, evoking the charm and nostalgia of a bygone era. There is a black "TIGER" logo at the bottom of the thermos bottle' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_8_0.png - text: 'TIGER_THERMOS_BOTTLE. An artistic and abstract representation of a yellow thermos bottle, blending creativity with visual storytelling. There is a black "TIGER" logo at the bottom of the thermos bottle' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_9_0.png - text: 'TIGER_THERMOS_BOTTLE. A futuristic and cutting-edge portrayal of a pink gray thermos bottle, set against a backdrop of advanced technology. 
There is a black "TIGER" logo at the bottom of the thermos bottle' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_10_0.png - text: 'a beautifully crafted portrait of a woman, highlighting her natural beauty and unique features' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_11_0.png - text: 'a powerful and striking portrait of a man, capturing his strength and character' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_12_0.png - text: 'a playful and spirited portrait of a boy, capturing youthful energy and innocence' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_13_0.png - text: 'a charming and vibrant portrait of a girl, emphasizing her bright personality and joy' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_14_0.png - text: 'a heartwarming and cohesive family portrait, showcasing the bonds and connections between loved ones' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_15_0.png - text: 'TIGER_THERMOS_BOTTLE. A photo-realistic image of a gray thermos bottle on a while tabletop with mountains in the background. There is a black "TIGER" logo at the bottom of the grey thermos bottle' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_16_0.png --- # tiger-thermos-lora This is a standard PEFT LoRA derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev). The main validation prompt used during training was: ``` TIGER_THERMOS_BOTTLE. A photo-realistic image of a gray thermos bottle on a while tabletop with mountains in the background. There is a black "TIGER" logo at the bottom of the grey thermos bottle ``` ## Validation settings - CFG: `3.5` - CFG Rescale: `0.0` - Steps: `20` - Sampler: `None` - Seed: `42` - Resolution: `512x512` Note: The validation settings are not necessarily the same as the [training settings](#training-settings). You can find some example images in the following gallery: <Gallery /> The text encoder **was not** trained. You may reuse the base model text encoder for inference. ## Training settings - Training epochs: 15 - Training steps: 1000 - Learning rate: 0.00013 - Effective batch size: 2 - Micro-batch size: 2 - Gradient accumulation steps: 1 - Number of GPUs: 1 - Prediction type: flow-matching - Rescaled betas zero SNR: False - Optimizer: adamw_bf16 - Precision: Pure BF16 - Quantised: No - Xformers: Not used - LoRA Rank: 16 - LoRA Alpha: None - LoRA Dropout: 0.1 - LoRA initialisation style: default ## Datasets ### default_dataset - Repeats: 10 - Total number of images: 12 - Total number of aspect buckets: 1 - Resolution: 0.147456 megapixels - Cropped: True - Crop style: center - Crop aspect: square ## Inference ```python import torch from diffusers import DiffusionPipeline model_id = 'black-forest-labs/FLUX.1-dev' adapter_id = 'elasticBottle/tiger-thermos-lora' pipeline = DiffusionPipeline.from_pretrained(model_id) pipeline.load_lora_weights(adapter_id) prompt = "TIGER_THERMOS_BOTTLE. A photo-realistic image of a gray thermos bottle on a while tabletop with mountains in the background. 
There is a black "TIGER" logo at the bottom of the grey thermos bottle" pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu') image = pipeline( prompt=prompt, num_inference_steps=20, generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(1641421826), width=512, height=512, guidance_scale=3.5, ).images[0] image.save("output.png", format="PNG") ```
downtown1/Qwen-Qwen1.5-1.8B-1725876091
downtown1
2024-09-09T10:01:35Z
5
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
2024-09-09T10:01:31Z
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
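The quick-start above is a template placeholder; loading a PEFT adapter on top of its declared base model usually follows the pattern below (whether this adapter targets causal-LM generation is an assumption based on the base model, and the generation settings are illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen1.5-1.8B"  # base model declared in the card metadata
adapter_id = "downtown1/Qwen-Qwen1.5-1.8B-1725876091"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the PEFT adapter weights from this repo to the base model.
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```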
Triangle104/NeuralDaredevil-8B-abliterated-Q4_0-GGUF
Triangle104
2024-09-09T09:51:32Z
39
0
null
[ "gguf", "dpo", "llama-cpp", "gguf-my-repo", "dataset:mlabonne/orpo-dpo-mix-40k", "base_model:mlabonne/NeuralDaredevil-8B-abliterated", "base_model:quantized:mlabonne/NeuralDaredevil-8B-abliterated", "license:llama3", "model-index", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-09T09:51:09Z
--- base_model: mlabonne/NeuralDaredevil-8B-abliterated datasets: - mlabonne/orpo-dpo-mix-40k license: llama3 tags: - dpo - llama-cpp - gguf-my-repo model-index: - name: Daredevil-8B-abliterated-dpomix results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 69.28 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B-abliterated-dpomix name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.05 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B-abliterated-dpomix name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 69.1 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B-abliterated-dpomix name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 60 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B-abliterated-dpomix name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 78.69 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B-abliterated-dpomix name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 71.8 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B-abliterated-dpomix name: Open LLM Leaderboard --- # Triangle104/NeuralDaredevil-8B-abliterated-Q4_0-GGUF This model was converted to GGUF format from [`mlabonne/NeuralDaredevil-8B-abliterated`](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/NeuralDaredevil-8B-abliterated-Q4_0-GGUF --hf-file neuraldaredevil-8b-abliterated-q4_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/NeuralDaredevil-8B-abliterated-Q4_0-GGUF --hf-file neuraldaredevil-8b-abliterated-q4_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. 
``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/NeuralDaredevil-8B-abliterated-Q4_0-GGUF --hf-file neuraldaredevil-8b-abliterated-q4_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/NeuralDaredevil-8B-abliterated-Q4_0-GGUF --hf-file neuraldaredevil-8b-abliterated-q4_0.gguf -c 2048 ```
distilled-ai/general-persona-chatbot-v1-09-09-24
distilled-ai
2024-09-09T09:49:40Z
46
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T09:39:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
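Since the template above leaves the quick-start empty, here is a minimal sketch based on the repo's `text-generation` and `conversational` tags; it assumes the checkpoint ships a chat template and a transformers version whose pipeline accepts chat messages, and the persona prompt is illustrative:

```python
from transformers import pipeline

# Mistral-architecture chat model per the repo tags.
chat = pipeline(
    "text-generation",
    model="distilled-ai/general-persona-chatbot-v1-09-09-24",
)

messages = [
    {"role": "system", "content": "You are a cheerful travel guide."},  # illustrative persona
    {"role": "user", "content": "What should I see in Lisbon in one day?"},
]

# With chat input, generated_text holds the conversation including the reply.
result = chat(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])
```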
mradermacher/MN-Maghin-12B-i1-GGUF
mradermacher
2024-09-09T09:40:21Z
27
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:rityak/MN-Maghin-12B", "base_model:quantized:rityak/MN-Maghin-12B", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-09-09T06:04:25Z
--- base_model: rityak/MN-Maghin-12B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/rityak/MN-Maghin-12B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/MN-Maghin-12B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MN-Maghin-12B-i1-GGUF/resolve/main/MN-Maghin-12B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/MN-Maghin-12B-i1-GGUF/resolve/main/MN-Maghin-12B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/MN-Maghin-12B-i1-GGUF/resolve/main/MN-Maghin-12B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/MN-Maghin-12B-i1-GGUF/resolve/main/MN-Maghin-12B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/MN-Maghin-12B-i1-GGUF/resolve/main/MN-Maghin-12B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/MN-Maghin-12B-i1-GGUF/resolve/main/MN-Maghin-12B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/MN-Maghin-12B-i1-GGUF/resolve/main/MN-Maghin-12B.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/MN-Maghin-12B-i1-GGUF/resolve/main/MN-Maghin-12B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MN-Maghin-12B-i1-GGUF/resolve/main/MN-Maghin-12B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/MN-Maghin-12B-i1-GGUF/resolve/main/MN-Maghin-12B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/MN-Maghin-12B-i1-GGUF/resolve/main/MN-Maghin-12B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MN-Maghin-12B-i1-GGUF/resolve/main/MN-Maghin-12B.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/MN-Maghin-12B-i1-GGUF/resolve/main/MN-Maghin-12B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/MN-Maghin-12B-i1-GGUF/resolve/main/MN-Maghin-12B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/MN-Maghin-12B-i1-GGUF/resolve/main/MN-Maghin-12B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/MN-Maghin-12B-i1-GGUF/resolve/main/MN-Maghin-12B.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/MN-Maghin-12B-i1-GGUF/resolve/main/MN-Maghin-12B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/MN-Maghin-12B-i1-GGUF/resolve/main/MN-Maghin-12B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/MN-Maghin-12B-i1-GGUF/resolve/main/MN-Maghin-12B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/MN-Maghin-12B-i1-GGUF/resolve/main/MN-Maghin-12B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/MN-Maghin-12B-i1-GGUF/resolve/main/MN-Maghin-12B.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
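For a concrete starting point, the single-file quants in the table can be fetched and run directly by llama.cpp; a sketch using the recommended i1-Q4_K_M file (conversation mode and context size are illustrative, assuming a recent build):

```bash
# Downloads the recommended quant from this repo on first run, then chats.
llama-cli --hf-repo mradermacher/MN-Maghin-12B-i1-GGUF \
  --hf-file MN-Maghin-12B.i1-Q4_K_M.gguf -cnv -c 4096
```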
bumblebee-testing/tiny-random-Phi3Model-rope_scaling-longrope-original_max_position_embeddings-256
bumblebee-testing
2024-09-09T09:38:35Z
96
0
transformers
[ "transformers", "safetensors", "phi3", "feature-extraction", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-09-09T09:38:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tartuNLP/EstBERT_NER_v2
tartuNLP
2024-09-09T09:38:22Z
134
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "token-classification", "et", "dataset:tartuNLP/EstNER", "arxiv:2011.04784", "base_model:tartuNLP/EstBERT", "base_model:finetune:tartuNLP/EstBERT", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-03T07:54:05Z
--- language: et license: cc-by-4.0 widget: - text: Eesti President on Alar Karis. base_model: tartuNLP/EstBERT datasets: - tartuNLP/EstNER --- # Estonian NER model based on EstBERT This model is a fine-tuned version of [tartuNLP/EstBERT](https://huggingface.co/tartuNLP/EstBERT) on the Estonian NER dataset. The model was trained by tartuNLP, the NLP research group at the institute of Computer Science at the University of Tartu. It achieves the following results on the test set: - Loss: 0.3565 - Precision: 0.7612 - Recall: 0.7744 - F1: 0.7678 - Accuracy: 0.9672 The entity-level results are as follows: | | Precision | Recall | F1 | Number | |---------| --------- | ------- | ------- | ------- | | DATE | 0.7278 | 0.7258 | 0.7268 | 372 | | EVENT | 0.3721 | 0.5714 | 0.4507 | 28 | | GPE | 0.8679 | 0.8369 | 0.8521 | 840 | | LOC | 0.6545 | 0.4832 | 0.5560 | 149 | | MONEY | 0.6625 | 0.6023 | 0.6310 | 88 | | ORG | 0.6761 | 0.7267 | 0.7005 | 589 | | PER | 0.8255 | 0.9068 | 0.8642 | 751 | | PERCENT | 1.0 | 0.9589 | 0.9790 | 73 | | PROD | 0.6030 | 0.5430 | 0.5714 | 221 | | TIME | 0.5682 | 0.5556 | 0.5618 | 45 | | TITLE | 0.7 | 0.8063 | 0.7494 | 191 | ## How to use You can use this model with Transformers pipeline for NER. Post-processing of results may be necessary as the model occasionally tags subword tokens as entities. ``` from transformers import BertTokenizer, BertForTokenClassification from transformers import pipeline tokenizer = BertTokenizer.from_pretrained('tartuNLP/EstBERT_NER') bertner = BertForTokenClassification.from_pretrained('tartuNLP/EstBERT_NER') nlp = pipeline("ner", model=bertner, tokenizer=tokenizer) text = "Kaia Kanepi (WTA 57.) langes USA-s Charlestonis toimuval WTA 500 kategooria tenniseturniiril konkurentsist kaheksandikfinaalis, kaotades poolatarile Magda Linette'ile (WTA 64.) 3 : 6, 6 : 4, 2 : 6." 
## Intended uses & limitations

This model can be used to find named entities in Estonian texts. The model is free to use for anyone. TartuNLP does not guarantee that the model is useful for anyone or anything. TartuNLP is not responsible for any results it generates.

## Training and evaluation data

The model was trained on two Estonian NER datasets:
- [The Reannotated Estonian NER corpus](https://metashare.ut.ee/repository/browse/reannotated-estonian-ner-corpus/bd43f1f614a511eca6e4fa163e9d45477d086613d2894fd5af79bf13e3f13594/)
- [The New Estonian NER corpus](https://metashare.ut.ee/repository/browse/new-estonian-ner-corpus/98b6706c963c11eba6e4fa163e9d45470bcd0533b6994c93ab8b8c628516ffed/)

Both datasets have been annotated with the same annotation scheme. For training this model, the datasets were joined.
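For reference, entity-level precision, recall, and F1 figures like those reported in this card can be computed with the seqeval library; a small sketch with hypothetical gold and predicted IOB2 tag sequences:

```python
from seqeval.metrics import classification_report, f1_score

# Hypothetical gold and predicted label sequences for two sentences;
# a real evaluation would use the model's predictions on the test split.
y_true = [["B-PER", "I-PER", "O", "B-GPE", "O"], ["B-ORG", "I-ORG", "O"]]
y_pred = [["B-PER", "I-PER", "O", "B-GPE", "O"], ["B-ORG", "O", "O"]]

print(classification_report(y_true, y_pred))  # per-entity-type precision/recall/F1
print("micro F1:", f1_score(y_true, y_pred))  # overall entity-level F1
```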
## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 1024 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 - lr_scheduler_type: polynomial - max num_epochs: 150 - early stopping limit: 20 - early stopping tol: 0.0001 - mixed_precision_training: Native AMP ### Training results The final model was saved after epoch 53 (shown in bold) where the overall F1 was the highest on the development set. | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Date Precision | Date Recall | Date F1 | Date Number | Event Precision | Event Recall | Event F1 | Event Number | Gpe Precision | Gpe Recall | Gpe F1 | Gpe Number | Loc Precision | Loc Recall | Loc F1 | Loc Number | Money Precision | Money Recall | Money F1 | Money Number | Org Precision | Org Recall | Org F1 | Org Number | Per Precision | Per Recall | Per F1 | Per Number | Percent Precision | Percent Recall | Percent F1 | Percent Number | Prod Precision | Prod Recall | Prod F1 | Prod Number | Time Precision | Time Recall | Time F1 | Time Number | Title Precision | Title Recall | Title F1 | Title Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|:--------------:|:-----------:|:-------:|:-----------:|:---------------:|:------------:|:--------:|:------------:|:-------------:|:----------:|:------:|:----------:|:-------------:|:----------:|:------:|:----------:|:---------------:|:------------:|:--------:|:------------:|:-------------:|:----------:|:------:|:----------:|:-------------:|:----------:|:------:|:----------:|:-----------------:|:--------------:|:----------:|:--------------:|:--------------:|:-----------:|:-------:|:-----------:|:--------------:|:-----------:|:-------:|:-----------:|:---------------:|:------------:|:--------:|:------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.3252 | 1 | 1061 | 0.1628 | 0.6835 | 0.6083 | 0.6437 | 0.9526 | 0.5910 | 0.6022 | 0.5965 | 372 | 0.0 | 0.0 | 0.0 | 28 | 0.8073 | 0.7631 | 0.7846 | 840 | 0.1389 | 0.0336 | 0.0541 | 149 | 0.4217 | 0.3977 | 0.4094 | 88 | 0.5381 | 0.5280 | 0.5330 | 589 | 0.7917 | 0.8655 | 0.8270 | 751 | 0.6471 | 0.3014 | 0.4112 | 73 | 0.2581 | 0.0724 | 0.1131 | 221 | 0.1429 | 0.0889 | 0.1096 | 45 | 0.7805 | 0.6702 | 0.7211 | 191 | 0.6835 | 0.6083 | 0.6437 | 0.9526 | | 0.1513 | 2 | 2122 | 0.1332 | 0.6906 | 0.7329 | 0.7111 | 0.9615 | 0.6185 | 0.7366 | 0.6724 | 372 | 0.0857 | 0.1071 | 0.0952 | 28 | 0.7874 | 0.8595 | 0.8219 | 840 | 0.4767 | 0.2752 | 0.3489 | 149 | 0.6848 | 0.7159 | 0.7000 | 88 | 0.6158 | 0.6231 | 0.6194 | 589 | 0.7770 | 0.9001 | 0.8341 | 751 | 0.9565 | 0.9041 | 0.9296 | 73 | 0.5 | 0.3620 | 0.4199 | 221 | 0.3571 | 0.3333 | 0.3448 | 45 | 0.6033 | 0.7644 | 0.6744 | 191 | 0.6906 | 0.7329 | 0.7111 | 0.9615 | | 0.1131 | 3 | 3183 | 0.1281 | 0.7224 | 0.7338 | 0.7280 | 0.9638 | 0.7054 | 0.7339 | 0.7194 | 372 | 0.1053 | 0.1429 | 0.1212 | 28 | 0.8013 | 0.85 | 0.8250 | 840 | 0.5476 | 0.3087 | 0.3948 | 149 | 0.6386 | 0.6023 | 0.6199 | 88 | 0.6371 | 0.6469 | 0.6420 | 589 | 0.8235 | 0.8762 | 0.8490 | 751 | 0.9859 | 0.9589 | 0.9722 | 73 | 0.5148 | 0.3937 | 0.4462 | 221 | 0.5116 | 0.4889 | 0.5 | 45 | 0.6245 | 0.7749 | 0.6916 | 191 | 0.7224 | 0.7338 | 0.7280 | 0.9638 | | 0.0884 | 4 | 4244 | 0.1354 | 0.7283 | 0.7386 | 0.7334 | 0.9639 | 0.6785 | 0.6694 | 0.6739 | 372 | 
0.1795 | 0.25 | 0.2090 | 28 | 0.8231 | 0.8310 | 0.8270 | 840 | 0.6020 | 0.3960 | 0.4777 | 149 | 0.6092 | 0.6023 | 0.6057 | 88 | 0.6473 | 0.7012 | 0.6732 | 589 | 0.8351 | 0.8628 | 0.8487 | 751 | 1.0 | 0.9726 | 0.9861 | 73 | 0.5899 | 0.4751 | 0.5263 | 221 | 0.4524 | 0.4222 | 0.4368 | 45 | 0.6 | 0.7853 | 0.6803 | 191 | 0.7283 | 0.7386 | 0.7334 | 0.9639 | | 0.0685 | 5 | 5305 | 0.1383 | 0.7224 | 0.7696 | 0.7453 | 0.9644 | 0.6635 | 0.7473 | 0.7029 | 372 | 0.26 | 0.4643 | 0.3333 | 28 | 0.8259 | 0.8357 | 0.8308 | 840 | 0.5913 | 0.4564 | 0.5152 | 149 | 0.6437 | 0.6364 | 0.64 | 88 | 0.6540 | 0.7284 | 0.6892 | 589 | 0.8070 | 0.8961 | 0.8492 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5693 | 0.5204 | 0.5437 | 221 | 0.5192 | 0.6 | 0.5567 | 45 | 0.6320 | 0.7644 | 0.6919 | 191 | 0.7224 | 0.7696 | 0.7453 | 0.9644 | | 0.0532 | 6 | 6366 | 0.1493 | 0.7099 | 0.7613 | 0.7347 | 0.9631 | 0.6727 | 0.6962 | 0.6843 | 372 | 0.2308 | 0.5357 | 0.3226 | 28 | 0.8242 | 0.8262 | 0.8252 | 840 | 0.5877 | 0.4497 | 0.5095 | 149 | 0.6410 | 0.5682 | 0.6024 | 88 | 0.6232 | 0.7470 | 0.6795 | 589 | 0.8087 | 0.8895 | 0.8472 | 751 | 0.9672 | 0.8082 | 0.8806 | 73 | 0.5107 | 0.5385 | 0.5242 | 221 | 0.6190 | 0.5778 | 0.5977 | 45 | 0.6371 | 0.7906 | 0.7056 | 191 | 0.7099 | 0.7613 | 0.7347 | 0.9631 | | 0.0403 | 7 | 7427 | 0.1592 | 0.7239 | 0.7592 | 0.7411 | 0.9642 | 0.6923 | 0.7016 | 0.6969 | 372 | 0.2857 | 0.5714 | 0.3810 | 28 | 0.8272 | 0.8262 | 0.8267 | 840 | 0.5752 | 0.4362 | 0.4962 | 149 | 0.6265 | 0.5909 | 0.6082 | 88 | 0.6402 | 0.6978 | 0.6677 | 589 | 0.8404 | 0.8762 | 0.8579 | 751 | 0.9859 | 0.9589 | 0.9722 | 73 | 0.5257 | 0.6018 | 0.5612 | 221 | 0.5870 | 0.6 | 0.5934 | 45 | 0.6235 | 0.8063 | 0.7032 | 191 | 0.7239 | 0.7592 | 0.7411 | 0.9642 | | 0.0304 | 8 | 8488 | 0.1738 | 0.7301 | 0.7484 | 0.7392 | 0.9644 | 0.6866 | 0.6774 | 0.6820 | 372 | 0.3409 | 0.5357 | 0.4167 | 28 | 0.8393 | 0.8083 | 0.8235 | 840 | 0.5882 | 0.4698 | 0.5224 | 149 | 0.6429 | 0.6136 | 0.6279 | 88 | 0.6608 | 0.6978 | 0.6788 | 589 | 0.8268 | 0.8708 | 0.8482 | 751 | 0.9595 | 0.9726 | 0.9660 | 73 | 0.5351 | 0.5520 | 0.5434 | 221 | 0.5208 | 0.5556 | 0.5376 | 45 | 0.6204 | 0.7958 | 0.6972 | 191 | 0.7301 | 0.7484 | 0.7392 | 0.9644 | | 0.0234 | 9 | 9549 | 0.1860 | 0.7248 | 0.7625 | 0.7432 | 0.9641 | 0.6947 | 0.7097 | 0.7021 | 372 | 0.2963 | 0.5714 | 0.3902 | 28 | 0.8317 | 0.8298 | 0.8308 | 840 | 0.5913 | 0.4564 | 0.5152 | 149 | 0.6118 | 0.5909 | 0.6012 | 88 | 0.6361 | 0.7063 | 0.6693 | 589 | 0.8410 | 0.8735 | 0.8570 | 751 | 0.9859 | 0.9589 | 0.9722 | 73 | 0.5212 | 0.6109 | 0.5625 | 221 | 0.5417 | 0.5778 | 0.5591 | 45 | 0.6414 | 0.7958 | 0.7103 | 191 | 0.7248 | 0.7625 | 0.7432 | 0.9641 | | 0.0178 | 10 | 10610 | 0.2037 | 0.7434 | 0.7383 | 0.7408 | 0.9640 | 0.7159 | 0.6774 | 0.6961 | 372 | 0.2857 | 0.4286 | 0.3429 | 28 | 0.8333 | 0.8333 | 0.8333 | 840 | 0.6262 | 0.4497 | 0.5234 | 149 | 0.6324 | 0.4886 | 0.5513 | 88 | 0.6568 | 0.6757 | 0.6661 | 589 | 0.8291 | 0.8722 | 0.8501 | 751 | 1.0 | 0.8219 | 0.9023 | 73 | 0.5672 | 0.5158 | 0.5403 | 221 | 0.5 | 0.5333 | 0.5161 | 45 | 0.6952 | 0.7644 | 0.7282 | 191 | 0.7434 | 0.7383 | 0.7408 | 0.9640 | | 0.0147 | 11 | 11671 | 0.2114 | 0.7440 | 0.7233 | 0.7335 | 0.9643 | 0.7009 | 0.6613 | 0.6805 | 372 | 0.3030 | 0.3571 | 0.3279 | 28 | 0.8352 | 0.8024 | 0.8185 | 840 | 0.6238 | 0.4228 | 0.504 | 149 | 0.65 | 0.5909 | 0.6190 | 88 | 0.6436 | 0.6469 | 0.6452 | 589 | 0.8407 | 0.8575 | 0.8490 | 751 | 0.9315 | 0.9315 | 0.9315 | 73 | 0.5812 | 0.5023 | 0.5388 | 221 | 0.5476 | 0.5111 | 0.5287 | 45 | 0.6835 | 0.7801 | 0.7286 | 191 | 0.7440 | 0.7233 | 
0.7335 | 0.9643 | | 0.0118 | 12 | 12732 | 0.2218 | 0.7331 | 0.7532 | 0.7430 | 0.9649 | 0.7119 | 0.6909 | 0.7012 | 372 | 0.3488 | 0.5357 | 0.4225 | 28 | 0.8325 | 0.8405 | 0.8365 | 840 | 0.5303 | 0.4698 | 0.4982 | 149 | 0.65 | 0.5909 | 0.6190 | 88 | 0.6690 | 0.6587 | 0.6638 | 589 | 0.8178 | 0.8908 | 0.8528 | 751 | 0.9677 | 0.8219 | 0.8889 | 73 | 0.5408 | 0.5701 | 0.5551 | 221 | 0.5102 | 0.5556 | 0.5319 | 45 | 0.6567 | 0.8010 | 0.7217 | 191 | 0.7331 | 0.7532 | 0.7430 | 0.9649 | | 0.0093 | 13 | 13793 | 0.2283 | 0.7495 | 0.7359 | 0.7427 | 0.9644 | 0.7163 | 0.6989 | 0.7075 | 372 | 0.3810 | 0.5714 | 0.4571 | 28 | 0.8612 | 0.7905 | 0.8243 | 840 | 0.6111 | 0.4430 | 0.5136 | 149 | 0.6145 | 0.5795 | 0.5965 | 88 | 0.6775 | 0.6740 | 0.6757 | 589 | 0.8346 | 0.8802 | 0.8568 | 751 | 0.9710 | 0.9178 | 0.9437 | 73 | 0.5619 | 0.5339 | 0.5476 | 221 | 0.4 | 0.4889 | 0.4400 | 45 | 0.6812 | 0.7382 | 0.7085 | 191 | 0.7495 | 0.7359 | 0.7427 | 0.9644 | | 0.0079 | 14 | 14854 | 0.2383 | 0.7371 | 0.7490 | 0.7430 | 0.9647 | 0.6727 | 0.7016 | 0.6868 | 372 | 0.3261 | 0.5357 | 0.4054 | 28 | 0.8453 | 0.8 | 0.8220 | 840 | 0.5963 | 0.4362 | 0.5039 | 149 | 0.625 | 0.5682 | 0.5952 | 88 | 0.6634 | 0.6927 | 0.6777 | 589 | 0.8433 | 0.8815 | 0.8620 | 751 | 0.9853 | 0.9178 | 0.9504 | 73 | 0.5427 | 0.5747 | 0.5582 | 221 | 0.5814 | 0.5556 | 0.5682 | 45 | 0.6513 | 0.8115 | 0.7226 | 191 | 0.7371 | 0.7490 | 0.7430 | 0.9647 | | 0.0068 | 15 | 15915 | 0.2511 | 0.7255 | 0.7359 | 0.7306 | 0.9639 | 0.6826 | 0.6532 | 0.6676 | 372 | 0.3590 | 0.5 | 0.4179 | 28 | 0.8295 | 0.8167 | 0.8230 | 840 | 0.5263 | 0.4698 | 0.4965 | 149 | 0.6575 | 0.5455 | 0.5963 | 88 | 0.6549 | 0.6604 | 0.6577 | 589 | 0.8242 | 0.8802 | 0.8513 | 751 | 0.9833 | 0.8082 | 0.8872 | 73 | 0.5398 | 0.5520 | 0.5459 | 221 | 0.36 | 0.4 | 0.3789 | 45 | 0.6511 | 0.8010 | 0.7183 | 191 | 0.7255 | 0.7359 | 0.7306 | 0.9639 | | 0.0061 | 16 | 16976 | 0.2497 | 0.7253 | 0.7690 | 0.7465 | 0.9648 | 0.6824 | 0.6989 | 0.6906 | 372 | 0.3333 | 0.5357 | 0.4110 | 28 | 0.8473 | 0.8321 | 0.8396 | 840 | 0.4583 | 0.5168 | 0.4858 | 149 | 0.6494 | 0.5682 | 0.6061 | 88 | 0.6556 | 0.7368 | 0.6938 | 589 | 0.8382 | 0.8828 | 0.8599 | 751 | 0.9841 | 0.8493 | 0.9118 | 73 | 0.5341 | 0.6380 | 0.5814 | 221 | 0.5 | 0.5333 | 0.5161 | 45 | 0.6622 | 0.7801 | 0.7163 | 191 | 0.7253 | 0.7690 | 0.7465 | 0.9648 | | 0.0054 | 17 | 18037 | 0.2554 | 0.7323 | 0.7625 | 0.7471 | 0.9650 | 0.6870 | 0.6962 | 0.6916 | 372 | 0.3421 | 0.4643 | 0.3939 | 28 | 0.8463 | 0.8262 | 0.8361 | 840 | 0.5902 | 0.4832 | 0.5314 | 149 | 0.6753 | 0.5909 | 0.6303 | 88 | 0.6640 | 0.7148 | 0.6885 | 589 | 0.8317 | 0.8948 | 0.8621 | 751 | 0.9437 | 0.9178 | 0.9306 | 73 | 0.5210 | 0.5611 | 0.5403 | 221 | 0.5 | 0.5111 | 0.5055 | 45 | 0.6102 | 0.8115 | 0.6966 | 191 | 0.7323 | 0.7625 | 0.7471 | 0.9650 | | 0.005 | 18 | 19098 | 0.2601 | 0.7273 | 0.7747 | 0.7503 | 0.9654 | 0.6970 | 0.7608 | 0.7275 | 372 | 0.2830 | 0.5357 | 0.3704 | 28 | 0.8320 | 0.8488 | 0.8403 | 840 | 0.5841 | 0.4430 | 0.5038 | 149 | 0.6477 | 0.6477 | 0.6477 | 88 | 0.6378 | 0.6995 | 0.6672 | 589 | 0.8501 | 0.8908 | 0.8700 | 751 | 0.9722 | 0.9589 | 0.9655 | 73 | 0.5323 | 0.5973 | 0.5629 | 221 | 0.4444 | 0.4444 | 0.4444 | 45 | 0.624 | 0.8168 | 0.7075 | 191 | 0.7273 | 0.7747 | 0.7503 | 0.9654 | | 0.0044 | 19 | 20159 | 0.2602 | 0.7369 | 0.7616 | 0.7490 | 0.9656 | 0.7124 | 0.7124 | 0.7124 | 372 | 0.3415 | 0.5 | 0.4058 | 28 | 0.8239 | 0.8631 | 0.8430 | 840 | 0.6355 | 0.4564 | 0.5313 | 149 | 0.6667 | 0.6136 | 0.6391 | 88 | 0.6517 | 0.6638 | 0.6577 | 589 | 0.8405 | 0.8842 | 0.8618 | 751 | 0.9857 | 0.9452 | 
0.9650 | 73 | 0.5144 | 0.5656 | 0.5388 | 221 | 0.5217 | 0.5333 | 0.5275 | 45 | 0.6550 | 0.7853 | 0.7143 | 191 | 0.7369 | 0.7616 | 0.7490 | 0.9656 | | 0.004 | 20 | 21220 | 0.2677 | 0.7347 | 0.7702 | 0.7520 | 0.9658 | 0.7374 | 0.7097 | 0.7233 | 372 | 0.2857 | 0.4286 | 0.3429 | 28 | 0.8466 | 0.8345 | 0.8405 | 840 | 0.6050 | 0.4832 | 0.5373 | 149 | 0.6667 | 0.6136 | 0.6391 | 88 | 0.6593 | 0.7131 | 0.6852 | 589 | 0.8240 | 0.8975 | 0.8591 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.4981 | 0.5837 | 0.5375 | 221 | 0.5102 | 0.5556 | 0.5319 | 45 | 0.6371 | 0.8272 | 0.7198 | 191 | 0.7347 | 0.7702 | 0.7520 | 0.9658 | | 0.0034 | 21 | 22281 | 0.2743 | 0.7386 | 0.7717 | 0.7548 | 0.9657 | 0.6984 | 0.7097 | 0.704 | 372 | 0.3784 | 0.5 | 0.4308 | 28 | 0.8475 | 0.8333 | 0.8403 | 840 | 0.6333 | 0.5101 | 0.5651 | 149 | 0.6190 | 0.5909 | 0.6047 | 88 | 0.6512 | 0.7385 | 0.6921 | 589 | 0.8428 | 0.8921 | 0.8668 | 751 | 0.9846 | 0.8767 | 0.9275 | 73 | 0.5513 | 0.5837 | 0.5670 | 221 | 0.5106 | 0.5333 | 0.5217 | 45 | 0.6379 | 0.8115 | 0.7143 | 191 | 0.7386 | 0.7717 | 0.7548 | 0.9657 | | 0.0033 | 22 | 23342 | 0.2788 | 0.7418 | 0.7520 | 0.7469 | 0.9652 | 0.7143 | 0.6989 | 0.7065 | 372 | 0.3182 | 0.5 | 0.3889 | 28 | 0.8367 | 0.8298 | 0.8332 | 840 | 0.6168 | 0.4430 | 0.5156 | 149 | 0.6235 | 0.6023 | 0.6127 | 88 | 0.6758 | 0.6689 | 0.6724 | 589 | 0.8327 | 0.8815 | 0.8564 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.5458 | 0.5928 | 0.5683 | 221 | 0.4783 | 0.4889 | 0.4835 | 45 | 0.6637 | 0.7853 | 0.7194 | 191 | 0.7418 | 0.7520 | 0.7469 | 0.9652 | | 0.0033 | 23 | 24403 | 0.2831 | 0.7342 | 0.7535 | 0.7437 | 0.9650 | 0.6981 | 0.6962 | 0.6972 | 372 | 0.3784 | 0.5 | 0.4308 | 28 | 0.8499 | 0.8024 | 0.8255 | 840 | 0.5034 | 0.4966 | 0.5 | 149 | 0.6067 | 0.6136 | 0.6102 | 88 | 0.6581 | 0.6961 | 0.6766 | 589 | 0.8350 | 0.8961 | 0.8645 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.5424 | 0.5792 | 0.5602 | 221 | 0.3774 | 0.4444 | 0.4082 | 45 | 0.7048 | 0.7749 | 0.7382 | 191 | 0.7342 | 0.7535 | 0.7437 | 0.9650 | | 0.0029 | 24 | 25464 | 0.2931 | 0.7544 | 0.7380 | 0.7461 | 0.9648 | 0.7365 | 0.6989 | 0.7172 | 372 | 0.3590 | 0.5 | 0.4179 | 28 | 0.8535 | 0.7976 | 0.8246 | 840 | 0.5849 | 0.4161 | 0.4863 | 149 | 0.6622 | 0.5568 | 0.6049 | 88 | 0.6672 | 0.6706 | 0.6689 | 589 | 0.8474 | 0.8802 | 0.8635 | 751 | 0.9701 | 0.8904 | 0.9286 | 73 | 0.5550 | 0.5475 | 0.5513 | 221 | 0.4889 | 0.4889 | 0.4889 | 45 | 0.7023 | 0.7906 | 0.7438 | 191 | 0.7544 | 0.7380 | 0.7461 | 0.9648 | | 0.0028 | 25 | 26525 | 0.2899 | 0.7489 | 0.7574 | 0.7531 | 0.9654 | 0.7021 | 0.7097 | 0.7059 | 372 | 0.3902 | 0.5714 | 0.4638 | 28 | 0.8635 | 0.8131 | 0.8375 | 840 | 0.6182 | 0.4564 | 0.5251 | 149 | 0.6471 | 0.625 | 0.6358 | 88 | 0.6613 | 0.6995 | 0.6799 | 589 | 0.8454 | 0.9028 | 0.8731 | 751 | 0.9583 | 0.9452 | 0.9517 | 73 | 0.5681 | 0.5475 | 0.5576 | 221 | 0.4222 | 0.4222 | 0.4222 | 45 | 0.6608 | 0.7853 | 0.7177 | 191 | 0.7489 | 0.7574 | 0.7531 | 0.9654 | | 0.0023 | 26 | 27586 | 0.2922 | 0.7413 | 0.7532 | 0.7472 | 0.9649 | 0.6897 | 0.6989 | 0.6943 | 372 | 0.35 | 0.5 | 0.4118 | 28 | 0.85 | 0.8298 | 0.8398 | 840 | 0.6161 | 0.4631 | 0.5287 | 149 | 0.6486 | 0.5455 | 0.5926 | 88 | 0.6486 | 0.6927 | 0.6700 | 589 | 0.8457 | 0.8828 | 0.8638 | 751 | 0.9853 | 0.9178 | 0.9504 | 73 | 0.5636 | 0.5611 | 0.5624 | 221 | 0.3958 | 0.4222 | 0.4086 | 45 | 0.6638 | 0.7958 | 0.7238 | 191 | 0.7413 | 0.7532 | 0.7472 | 0.9649 | | 0.0021 | 27 | 28647 | 0.2967 | 0.7514 | 0.7568 | 0.7541 | 0.9656 | 0.7081 | 0.7043 | 0.7062 | 372 | 0.3659 | 0.5357 | 0.4348 | 28 | 0.8547 | 0.8190 | 0.8365 | 840 | 0.5641 | 0.4430 
| 0.4962 | 149 | 0.6582 | 0.5909 | 0.6228 | 88 | 0.6677 | 0.7097 | 0.6881 | 589 | 0.8459 | 0.8842 | 0.8646 | 751 | 0.9710 | 0.9178 | 0.9437 | 73 | 0.5806 | 0.5701 | 0.5753 | 221 | 0.4898 | 0.5333 | 0.5106 | 45 | 0.7089 | 0.7906 | 0.7475 | 191 | 0.7514 | 0.7568 | 0.7541 | 0.9656 | | 0.0025 | 28 | 29708 | 0.2957 | 0.7335 | 0.7622 | 0.7475 | 0.9651 | 0.7060 | 0.7231 | 0.7145 | 372 | 0.3077 | 0.4286 | 0.3582 | 28 | 0.8459 | 0.8429 | 0.8444 | 840 | 0.5069 | 0.4899 | 0.4983 | 149 | 0.6438 | 0.5341 | 0.5839 | 88 | 0.6838 | 0.7012 | 0.6924 | 589 | 0.8413 | 0.8895 | 0.8647 | 751 | 0.9552 | 0.8767 | 0.9143 | 73 | 0.4901 | 0.5611 | 0.5232 | 221 | 0.3818 | 0.4667 | 0.42 | 45 | 0.6580 | 0.7958 | 0.7204 | 191 | 0.7335 | 0.7622 | 0.7475 | 0.9651 | | 0.0023 | 29 | 30769 | 0.3049 | 0.7455 | 0.7544 | 0.7499 | 0.9654 | 0.6997 | 0.7392 | 0.7190 | 372 | 0.3182 | 0.5 | 0.3889 | 28 | 0.8483 | 0.8119 | 0.8297 | 840 | 0.5630 | 0.5101 | 0.5352 | 149 | 0.6579 | 0.5682 | 0.6098 | 88 | 0.6791 | 0.7114 | 0.6949 | 589 | 0.8583 | 0.8628 | 0.8606 | 751 | 0.9853 | 0.9178 | 0.9504 | 73 | 0.5234 | 0.5566 | 0.5395 | 221 | 0.4565 | 0.4667 | 0.4615 | 45 | 0.7009 | 0.7853 | 0.7407 | 191 | 0.7455 | 0.7544 | 0.7499 | 0.9654 | | 0.0018 | 30 | 31830 | 0.3042 | 0.7415 | 0.7679 | 0.7544 | 0.9654 | 0.6935 | 0.7419 | 0.7169 | 372 | 0.3333 | 0.5 | 0.4 | 28 | 0.8563 | 0.8226 | 0.8391 | 840 | 0.5878 | 0.5168 | 0.55 | 149 | 0.6582 | 0.5909 | 0.6228 | 88 | 0.6677 | 0.7470 | 0.7051 | 589 | 0.8544 | 0.8828 | 0.8684 | 751 | 0.9710 | 0.9178 | 0.9437 | 73 | 0.5300 | 0.5204 | 0.5251 | 221 | 0.4375 | 0.4667 | 0.4516 | 45 | 0.6417 | 0.8063 | 0.7146 | 191 | 0.7415 | 0.7679 | 0.7544 | 0.9654 | | 0.0017 | 31 | 32891 | 0.3071 | 0.7540 | 0.7481 | 0.7510 | 0.9660 | 0.7083 | 0.7312 | 0.7196 | 372 | 0.4054 | 0.5357 | 0.4615 | 28 | 0.8552 | 0.8226 | 0.8386 | 840 | 0.6311 | 0.4362 | 0.5159 | 149 | 0.6220 | 0.5795 | 0.6 | 88 | 0.6734 | 0.6757 | 0.6746 | 589 | 0.8626 | 0.8775 | 0.8700 | 751 | 0.9855 | 0.9315 | 0.9577 | 73 | 0.5307 | 0.5475 | 0.5390 | 221 | 0.3830 | 0.4 | 0.3913 | 45 | 0.7019 | 0.7644 | 0.7318 | 191 | 0.7540 | 0.7481 | 0.7510 | 0.9660 | | 0.0018 | 32 | 33952 | 0.3190 | 0.7499 | 0.7553 | 0.7526 | 0.9656 | 0.7182 | 0.7124 | 0.7152 | 372 | 0.3333 | 0.5357 | 0.4110 | 28 | 0.8586 | 0.7952 | 0.8257 | 840 | 0.6116 | 0.4966 | 0.5481 | 149 | 0.6463 | 0.6023 | 0.6235 | 88 | 0.6805 | 0.6978 | 0.6890 | 589 | 0.8360 | 0.8895 | 0.8619 | 751 | 0.9855 | 0.9315 | 0.9577 | 73 | 0.5633 | 0.5837 | 0.5733 | 221 | 0.5106 | 0.5333 | 0.5217 | 45 | 0.6711 | 0.8010 | 0.7303 | 191 | 0.7499 | 0.7553 | 0.7526 | 0.9656 | | 0.0018 | 33 | 35013 | 0.3094 | 0.7460 | 0.7774 | 0.7614 | 0.9665 | 0.7147 | 0.7473 | 0.7306 | 372 | 0.3659 | 0.5357 | 0.4348 | 28 | 0.8556 | 0.8393 | 0.8474 | 840 | 0.6273 | 0.4631 | 0.5328 | 149 | 0.6506 | 0.6136 | 0.6316 | 88 | 0.6787 | 0.7351 | 0.7058 | 589 | 0.8344 | 0.8988 | 0.8654 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5702 | 0.6063 | 0.5877 | 221 | 0.3036 | 0.3778 | 0.3366 | 45 | 0.6567 | 0.8010 | 0.7217 | 191 | 0.7460 | 0.7774 | 0.7614 | 0.9665 | | 0.0015 | 34 | 36074 | 0.3091 | 0.7441 | 0.7759 | 0.7597 | 0.9665 | 0.7113 | 0.7285 | 0.7198 | 372 | 0.3404 | 0.5714 | 0.4267 | 28 | 0.8266 | 0.8512 | 0.8387 | 840 | 0.5405 | 0.5369 | 0.5387 | 149 | 0.6707 | 0.625 | 0.6471 | 88 | 0.6856 | 0.7182 | 0.7015 | 589 | 0.8517 | 0.8868 | 0.8689 | 751 | 1.0 | 0.9452 | 0.9718 | 73 | 0.5752 | 0.5882 | 0.5817 | 221 | 0.3878 | 0.4222 | 0.4043 | 45 | 0.6830 | 0.8010 | 0.7373 | 191 | 0.7441 | 0.7759 | 0.7597 | 0.9665 | | 0.0015 | 35 | 37135 | 0.3185 | 0.7487 | 0.7619 
| 0.7552 | 0.9660 | 0.6982 | 0.7339 | 0.7156 | 372 | 0.3415 | 0.5 | 0.4058 | 28 | 0.8685 | 0.8179 | 0.8424 | 840 | 0.5504 | 0.4765 | 0.5108 | 149 | 0.6353 | 0.6136 | 0.6243 | 88 | 0.6636 | 0.7267 | 0.6937 | 589 | 0.8654 | 0.8815 | 0.8734 | 751 | 1.0 | 0.9315 | 0.9645 | 73 | 0.55 | 0.5475 | 0.5488 | 221 | 0.3673 | 0.4 | 0.3830 | 45 | 0.6937 | 0.8063 | 0.7458 | 191 | 0.7487 | 0.7619 | 0.7552 | 0.9660 | | 0.0015 | 36 | 38196 | 0.3203 | 0.7438 | 0.7649 | 0.7542 | 0.9660 | 0.6961 | 0.7204 | 0.7081 | 372 | 0.3659 | 0.5357 | 0.4348 | 28 | 0.8617 | 0.8381 | 0.8497 | 840 | 0.5203 | 0.5168 | 0.5185 | 149 | 0.6667 | 0.5909 | 0.6265 | 88 | 0.6710 | 0.7063 | 0.6882 | 589 | 0.8495 | 0.8868 | 0.8678 | 751 | 0.9710 | 0.9178 | 0.9437 | 73 | 0.5561 | 0.5385 | 0.5471 | 221 | 0.42 | 0.4667 | 0.4421 | 45 | 0.6568 | 0.8115 | 0.7260 | 191 | 0.7438 | 0.7649 | 0.7542 | 0.9660 | | 0.0013 | 37 | 39257 | 0.3298 | 0.7315 | 0.7732 | 0.7518 | 0.9656 | 0.6915 | 0.7231 | 0.7070 | 372 | 0.3333 | 0.5714 | 0.4211 | 28 | 0.8654 | 0.8190 | 0.8416 | 840 | 0.4793 | 0.5436 | 0.5094 | 149 | 0.6582 | 0.5909 | 0.6228 | 88 | 0.6656 | 0.7267 | 0.6948 | 589 | 0.8289 | 0.9028 | 0.8642 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5574 | 0.5928 | 0.5746 | 221 | 0.4043 | 0.4222 | 0.4130 | 45 | 0.6408 | 0.8220 | 0.7202 | 191 | 0.7315 | 0.7732 | 0.7518 | 0.9656 | | 0.0012 | 38 | 40318 | 0.3311 | 0.7533 | 0.7610 | 0.7571 | 0.9664 | 0.7060 | 0.7231 | 0.7145 | 372 | 0.3571 | 0.5357 | 0.4286 | 28 | 0.8613 | 0.8357 | 0.8483 | 840 | 0.6339 | 0.4765 | 0.5441 | 149 | 0.6543 | 0.6023 | 0.6272 | 88 | 0.6528 | 0.7182 | 0.6839 | 589 | 0.8424 | 0.8828 | 0.8622 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.6031 | 0.5294 | 0.5639 | 221 | 0.4130 | 0.4222 | 0.4176 | 45 | 0.7122 | 0.7644 | 0.7374 | 191 | 0.7533 | 0.7610 | 0.7571 | 0.9664 | | 0.0012 | 39 | 41379 | 0.3328 | 0.7444 | 0.7553 | 0.7498 | 0.9657 | 0.6818 | 0.7258 | 0.7031 | 372 | 0.3478 | 0.5714 | 0.4324 | 28 | 0.8561 | 0.8143 | 0.8347 | 840 | 0.6055 | 0.4430 | 0.5116 | 149 | 0.6582 | 0.5909 | 0.6228 | 88 | 0.6715 | 0.7046 | 0.6877 | 589 | 0.8461 | 0.8708 | 0.8583 | 751 | 0.9706 | 0.9041 | 0.9362 | 73 | 0.5665 | 0.5973 | 0.5815 | 221 | 0.4082 | 0.4444 | 0.4255 | 45 | 0.6770 | 0.8010 | 0.7338 | 191 | 0.7444 | 0.7553 | 0.7498 | 0.9657 | | 0.0014 | 40 | 42440 | 0.3415 | 0.7421 | 0.7437 | 0.7429 | 0.9641 | 0.6931 | 0.7043 | 0.6987 | 372 | 0.3488 | 0.5357 | 0.4225 | 28 | 0.8422 | 0.8262 | 0.8341 | 840 | 0.6190 | 0.4362 | 0.5118 | 149 | 0.6622 | 0.5568 | 0.6049 | 88 | 0.6888 | 0.6350 | 0.6608 | 589 | 0.8175 | 0.8828 | 0.8489 | 751 | 1.0 | 0.9178 | 0.9571 | 73 | 0.5584 | 0.5837 | 0.5708 | 221 | 0.4043 | 0.4222 | 0.4130 | 45 | 0.6580 | 0.7958 | 0.7204 | 191 | 0.7421 | 0.7437 | 0.7429 | 0.9641 | | 0.0013 | 41 | 43501 | 0.3401 | 0.7501 | 0.7487 | 0.7494 | 0.9651 | 0.6915 | 0.7231 | 0.7070 | 372 | 0.3421 | 0.4643 | 0.3939 | 28 | 0.8545 | 0.8179 | 0.8358 | 840 | 0.6346 | 0.4430 | 0.5217 | 149 | 0.6812 | 0.5341 | 0.5987 | 88 | 0.6728 | 0.6808 | 0.6768 | 589 | 0.8380 | 0.8748 | 0.8560 | 751 | 0.9710 | 0.9178 | 0.9437 | 73 | 0.5860 | 0.5701 | 0.5780 | 221 | 0.4423 | 0.5111 | 0.4742 | 45 | 0.6787 | 0.7853 | 0.7282 | 191 | 0.7501 | 0.7487 | 0.7494 | 0.9651 | | 0.0011 | 42 | 44562 | 0.3468 | 0.7426 | 0.7687 | 0.7554 | 0.9650 | 0.6965 | 0.7527 | 0.7235 | 372 | 0.3488 | 0.5357 | 0.4225 | 28 | 0.8667 | 0.8202 | 0.8428 | 840 | 0.6408 | 0.4430 | 0.5238 | 149 | 0.6709 | 0.6023 | 0.6347 | 88 | 0.6902 | 0.7148 | 0.7023 | 589 | 0.8404 | 0.8975 | 0.8680 | 751 | 0.9444 | 0.9315 | 0.9379 | 73 | 0.5191 | 0.6154 | 0.5631 | 221 | 0.3469 | 
0.3778 | 0.3617 | 45 | 0.6210 | 0.8063 | 0.7016 | 191 | 0.7426 | 0.7687 | 0.7554 | 0.9650 | | 0.0015 | 43 | 45623 | 0.3440 | 0.7566 | 0.7422 | 0.7493 | 0.9648 | 0.6937 | 0.7366 | 0.7145 | 372 | 0.3846 | 0.5357 | 0.4478 | 28 | 0.8608 | 0.8095 | 0.8344 | 840 | 0.6082 | 0.3960 | 0.4797 | 149 | 0.7 | 0.5568 | 0.6203 | 88 | 0.6766 | 0.6570 | 0.6667 | 589 | 0.8317 | 0.8881 | 0.8590 | 751 | 0.9701 | 0.8904 | 0.9286 | 73 | 0.6224 | 0.5520 | 0.5851 | 221 | 0.3913 | 0.4 | 0.3956 | 45 | 0.7081 | 0.7749 | 0.74 | 191 | 0.7566 | 0.7422 | 0.7493 | 0.9648 | | 0.0011 | 44 | 46684 | 0.3354 | 0.7565 | 0.7640 | 0.7602 | 0.9664 | 0.7062 | 0.7366 | 0.7211 | 372 | 0.3659 | 0.5357 | 0.4348 | 28 | 0.8483 | 0.8452 | 0.8468 | 840 | 0.6095 | 0.4295 | 0.5039 | 149 | 0.6883 | 0.6023 | 0.6424 | 88 | 0.6880 | 0.6740 | 0.6810 | 589 | 0.8517 | 0.8948 | 0.8727 | 751 | 0.9710 | 0.9178 | 0.9437 | 73 | 0.6238 | 0.5928 | 0.6079 | 221 | 0.3830 | 0.4 | 0.3913 | 45 | 0.65 | 0.8168 | 0.7239 | 191 | 0.7565 | 0.7640 | 0.7602 | 0.9664 | | 0.0011 | 45 | 47745 | 0.3347 | 0.7485 | 0.7622 | 0.7553 | 0.9655 | 0.7088 | 0.7392 | 0.7237 | 372 | 0.3636 | 0.5714 | 0.4444 | 28 | 0.8603 | 0.8286 | 0.8441 | 840 | 0.5882 | 0.4698 | 0.5224 | 149 | 0.6023 | 0.6023 | 0.6023 | 88 | 0.6770 | 0.6689 | 0.6729 | 589 | 0.8417 | 0.8921 | 0.8662 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.6037 | 0.5928 | 0.5982 | 221 | 0.4583 | 0.4889 | 0.4731 | 45 | 0.6275 | 0.8115 | 0.7078 | 191 | 0.7485 | 0.7622 | 0.7553 | 0.9655 | | 0.0011 | 46 | 48806 | 0.3421 | 0.7481 | 0.7640 | 0.7559 | 0.9657 | 0.7261 | 0.7339 | 0.7299 | 372 | 0.3171 | 0.4643 | 0.3768 | 28 | 0.8570 | 0.8202 | 0.8382 | 840 | 0.5691 | 0.4698 | 0.5147 | 149 | 0.6429 | 0.6136 | 0.6279 | 88 | 0.6769 | 0.7114 | 0.6937 | 589 | 0.8311 | 0.8908 | 0.8599 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5714 | 0.5611 | 0.5662 | 221 | 0.5 | 0.5556 | 0.5263 | 45 | 0.6638 | 0.7958 | 0.7238 | 191 | 0.7481 | 0.7640 | 0.7559 | 0.9657 | | 0.0009 | 47 | 49867 | 0.3487 | 0.7496 | 0.7604 | 0.7550 | 0.9656 | 0.7158 | 0.7043 | 0.7100 | 372 | 0.3409 | 0.5357 | 0.4167 | 28 | 0.86 | 0.8190 | 0.8390 | 840 | 0.5496 | 0.4832 | 0.5143 | 149 | 0.7162 | 0.6023 | 0.6543 | 88 | 0.6745 | 0.7284 | 0.7004 | 589 | 0.8346 | 0.8802 | 0.8568 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5566 | 0.5339 | 0.5450 | 221 | 0.5349 | 0.5111 | 0.5227 | 45 | 0.6828 | 0.8115 | 0.7416 | 191 | 0.7496 | 0.7604 | 0.7550 | 0.9656 | | 0.0009 | 48 | 50928 | 0.3470 | 0.7414 | 0.7649 | 0.7529 | 0.9651 | 0.7092 | 0.7473 | 0.7277 | 372 | 0.3333 | 0.5357 | 0.4110 | 28 | 0.8541 | 0.8226 | 0.8381 | 840 | 0.5847 | 0.4631 | 0.5169 | 149 | 0.6835 | 0.6136 | 0.6467 | 88 | 0.6801 | 0.7148 | 0.6970 | 589 | 0.8319 | 0.8895 | 0.8597 | 751 | 0.9571 | 0.9178 | 0.9371 | 73 | 0.5307 | 0.5475 | 0.5390 | 221 | 0.4583 | 0.4889 | 0.4731 | 45 | 0.6364 | 0.8063 | 0.7113 | 191 | 0.7414 | 0.7649 | 0.7529 | 0.9651 | | 0.0011 | 49 | 51989 | 0.3389 | 0.7435 | 0.7664 | 0.7547 | 0.9659 | 0.6957 | 0.7312 | 0.7130 | 372 | 0.3590 | 0.5 | 0.4179 | 28 | 0.8561 | 0.8286 | 0.8421 | 840 | 0.6636 | 0.4899 | 0.5637 | 149 | 0.6136 | 0.6136 | 0.6136 | 88 | 0.6732 | 0.6995 | 0.6861 | 589 | 0.8251 | 0.8921 | 0.8573 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5746 | 0.5928 | 0.5835 | 221 | 0.4348 | 0.4444 | 0.4396 | 45 | 0.6390 | 0.8063 | 0.7130 | 191 | 0.7435 | 0.7664 | 0.7547 | 0.9659 | | 0.0009 | 50 | 53050 | 0.3557 | 0.7490 | 0.7640 | 0.7564 | 0.9659 | 0.6948 | 0.6855 | 0.6901 | 372 | 0.3947 | 0.5357 | 0.4545 | 28 | 0.8584 | 0.8298 | 0.8438 | 840 | 0.6455 | 0.4765 | 0.5483 | 149 | 0.6933 | 0.5909 | 0.6380 | 88 | 
0.6745 | 0.7317 | 0.7020 | 589 | 0.8296 | 0.8948 | 0.8610 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.6082 | 0.5339 | 0.5687 | 221 | 0.4043 | 0.4222 | 0.4130 | 45 | 0.6270 | 0.8272 | 0.7133 | 191 | 0.7490 | 0.7640 | 0.7564 | 0.9659 | | 0.0008 | 51 | 54111 | 0.3492 | 0.7516 | 0.7601 | 0.7558 | 0.9662 | 0.7104 | 0.6989 | 0.7046 | 372 | 0.3714 | 0.4643 | 0.4127 | 28 | 0.8545 | 0.8321 | 0.8432 | 840 | 0.6496 | 0.5101 | 0.5714 | 149 | 0.625 | 0.5682 | 0.5952 | 88 | 0.6722 | 0.6893 | 0.6806 | 589 | 0.8413 | 0.8895 | 0.8647 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5611 | 0.5611 | 0.5611 | 221 | 0.4792 | 0.5111 | 0.4946 | 45 | 0.6724 | 0.8168 | 0.7376 | 191 | 0.7516 | 0.7601 | 0.7558 | 0.9662 | | 0.0008 | 52 | 55172 | 0.3432 | 0.7526 | 0.7625 | 0.7575 | 0.9661 | 0.7044 | 0.7366 | 0.7201 | 372 | 0.3571 | 0.5357 | 0.4286 | 28 | 0.8610 | 0.8262 | 0.8433 | 840 | 0.6140 | 0.4698 | 0.5323 | 149 | 0.6667 | 0.5909 | 0.6265 | 88 | 0.6766 | 0.6927 | 0.6846 | 589 | 0.8403 | 0.8895 | 0.8642 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5849 | 0.5611 | 0.5727 | 221 | 0.46 | 0.5111 | 0.4842 | 45 | 0.6681 | 0.8115 | 0.7329 | 191 | 0.7526 | 0.7625 | 0.7575 | 0.9661 | | **0.0006** | **53** | **56233** | **0.3565** | **0.7615** | **0.7747** | **0.7681** | **0.9672** | **0.7305** | **0.7285** | **0.7295** | **372** | **0.3721** | **0.5714** | **0.4507** | **28** | **0.8679** | **0.8369** | **0.8521** | **840** | **0.6545** | **0.4832** | **0.5560** | **149** | **0.6625** | **0.6023** | **0.6310** | **88** | **0.6761** | **0.7267** | **0.7005** | **589** | **0.8255** | **0.9068** | **0.8642** | **751** | **1.0** | **0.9589** | **0.9790** | **73** | **0.6030** | **0.5430** | **0.5714** | **221** | **0.5682** | **0.5556** | **0.5618** | **45** | **0.7** | **0.8063** | **0.7494** | **191** | **0.7615** | **0.7747** | **0.7681** | **0.9672** | | 0.0008 | 54 | 57294 | 0.3480 | 0.7590 | 0.7631 | 0.7610 | 0.9668 | 0.7452 | 0.7312 | 0.7381 | 372 | 0.3409 | 0.5357 | 0.4167 | 28 | 0.8589 | 0.8190 | 0.8385 | 840 | 0.5935 | 0.4899 | 0.5368 | 149 | 0.7027 | 0.5909 | 0.6420 | 88 | 0.6924 | 0.6842 | 0.6883 | 589 | 0.8432 | 0.8948 | 0.8682 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5856 | 0.5882 | 0.5869 | 221 | 0.5102 | 0.5556 | 0.5319 | 45 | 0.6513 | 0.8115 | 0.7226 | 191 | 0.7590 | 0.7631 | 0.7610 | 0.9668 | | 0.0008 | 55 | 58355 | 0.3568 | 0.7601 | 0.7622 | 0.7612 | 0.9663 | 0.7228 | 0.7151 | 0.7189 | 372 | 0.3571 | 0.5357 | 0.4286 | 28 | 0.8429 | 0.8429 | 0.8429 | 840 | 0.6634 | 0.4497 | 0.536 | 149 | 0.7 | 0.5568 | 0.6203 | 88 | 0.6828 | 0.7165 | 0.6993 | 589 | 0.8655 | 0.8828 | 0.8741 | 751 | 0.9853 | 0.9178 | 0.9504 | 73 | 0.5909 | 0.5294 | 0.5585 | 221 | 0.5106 | 0.5333 | 0.5217 | 45 | 0.6429 | 0.8010 | 0.7133 | 191 | 0.7601 | 0.7622 | 0.7612 | 0.9663 | | 0.0009 | 56 | 59416 | 0.3498 | 0.7542 | 0.7580 | 0.7561 | 0.9661 | 0.7178 | 0.7043 | 0.7110 | 372 | 0.3409 | 0.5357 | 0.4167 | 28 | 0.8379 | 0.8429 | 0.8404 | 840 | 0.6634 | 0.4497 | 0.536 | 149 | 0.6322 | 0.625 | 0.6286 | 88 | 0.6895 | 0.6825 | 0.6860 | 589 | 0.8513 | 0.8842 | 0.8674 | 751 | 0.9577 | 0.9315 | 0.9444 | 73 | 0.5613 | 0.5385 | 0.5497 | 221 | 0.5111 | 0.5111 | 0.5111 | 45 | 0.6667 | 0.8063 | 0.7299 | 191 | 0.7542 | 0.7580 | 0.7561 | 0.9661 | | 0.0007 | 57 | 60477 | 0.3486 | 0.7479 | 0.7711 | 0.7593 | 0.9663 | 0.7143 | 0.7392 | 0.7266 | 372 | 0.3571 | 0.5357 | 0.4286 | 28 | 0.8417 | 0.8417 | 0.8417 | 840 | 0.5923 | 0.5168 | 0.5520 | 149 | 0.6667 | 0.6136 | 0.6391 | 88 | 0.6720 | 0.7165 | 0.6935 | 589 | 0.8562 | 0.8802 | 0.8680 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 
0.5670 | 0.5747 | 0.5708 | 221 | 0.4583 | 0.4889 | 0.4731 | 45 | 0.6623 | 0.8010 | 0.7251 | 191 | 0.7479 | 0.7711 | 0.7593 | 0.9663 | | 0.0007 | 58 | 61538 | 0.3497 | 0.7539 | 0.7744 | 0.7640 | 0.9667 | 0.7143 | 0.7392 | 0.7266 | 372 | 0.3659 | 0.5357 | 0.4348 | 28 | 0.8449 | 0.8429 | 0.8439 | 840 | 0.6429 | 0.4832 | 0.5517 | 149 | 0.6667 | 0.5909 | 0.6265 | 88 | 0.6708 | 0.7267 | 0.6976 | 589 | 0.8499 | 0.8975 | 0.8731 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.6108 | 0.5611 | 0.5849 | 221 | 0.5 | 0.4889 | 0.4944 | 45 | 0.6525 | 0.8063 | 0.7213 | 191 | 0.7539 | 0.7744 | 0.7640 | 0.9667 | | 0.0008 | 59 | 62599 | 0.3581 | 0.7474 | 0.7762 | 0.7615 | 0.9662 | 0.7183 | 0.7473 | 0.7325 | 372 | 0.3409 | 0.5357 | 0.4167 | 28 | 0.8439 | 0.8429 | 0.8434 | 840 | 0.5467 | 0.5503 | 0.5485 | 149 | 0.6709 | 0.6023 | 0.6347 | 88 | 0.6693 | 0.7250 | 0.6960 | 589 | 0.8454 | 0.8881 | 0.8662 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.5961 | 0.5475 | 0.5708 | 221 | 0.5 | 0.5333 | 0.5161 | 45 | 0.6769 | 0.8115 | 0.7381 | 191 | 0.7474 | 0.7762 | 0.7615 | 0.9662 | | 0.0007 | 60 | 63660 | 0.3636 | 0.7494 | 0.7676 | 0.7584 | 0.9662 | 0.7016 | 0.7204 | 0.7109 | 372 | 0.3488 | 0.5357 | 0.4225 | 28 | 0.8489 | 0.8357 | 0.8422 | 840 | 0.6 | 0.4832 | 0.5353 | 149 | 0.6538 | 0.5795 | 0.6145 | 88 | 0.6828 | 0.7199 | 0.7008 | 589 | 0.8476 | 0.8815 | 0.8642 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.5579 | 0.5882 | 0.5727 | 221 | 0.4762 | 0.4444 | 0.4598 | 45 | 0.6797 | 0.8220 | 0.7441 | 191 | 0.7494 | 0.7676 | 0.7584 | 0.9662 | | 0.0008 | 61 | 64721 | 0.3646 | 0.7538 | 0.7574 | 0.7556 | 0.9660 | 0.6854 | 0.7204 | 0.7025 | 372 | 0.3659 | 0.5357 | 0.4348 | 28 | 0.8573 | 0.8369 | 0.8470 | 840 | 0.6306 | 0.4698 | 0.5385 | 149 | 0.6667 | 0.5909 | 0.6265 | 88 | 0.6896 | 0.6978 | 0.6937 | 589 | 0.8495 | 0.8722 | 0.8607 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.5728 | 0.5520 | 0.5622 | 221 | 0.375 | 0.4 | 0.3871 | 45 | 0.6830 | 0.8010 | 0.7373 | 191 | 0.7538 | 0.7574 | 0.7556 | 0.9660 | | 0.0006 | 62 | 65782 | 0.3697 | 0.7510 | 0.7460 | 0.7485 | 0.9651 | 0.6885 | 0.7070 | 0.6976 | 372 | 0.4286 | 0.5357 | 0.4762 | 28 | 0.8663 | 0.7869 | 0.8247 | 840 | 0.5902 | 0.4832 | 0.5314 | 149 | 0.6757 | 0.5682 | 0.6173 | 88 | 0.6667 | 0.6927 | 0.6794 | 589 | 0.8432 | 0.8948 | 0.8682 | 751 | 0.9851 | 0.9041 | 0.9429 | 73 | 0.5829 | 0.5566 | 0.5694 | 221 | 0.3673 | 0.4 | 0.3830 | 45 | 0.6995 | 0.7801 | 0.7376 | 191 | 0.7510 | 0.7460 | 0.7485 | 0.9651 | | 0.0006 | 63 | 66843 | 0.3661 | 0.7504 | 0.7502 | 0.7503 | 0.9655 | 0.6909 | 0.6909 | 0.6909 | 372 | 0.4286 | 0.5357 | 0.4762 | 28 | 0.8571 | 0.8143 | 0.8352 | 840 | 0.5814 | 0.5034 | 0.5396 | 149 | 0.6582 | 0.5909 | 0.6228 | 88 | 0.7013 | 0.6655 | 0.6829 | 589 | 0.8348 | 0.8948 | 0.8638 | 751 | 0.9571 | 0.9178 | 0.9371 | 73 | 0.5570 | 0.5747 | 0.5657 | 221 | 0.3830 | 0.4 | 0.3913 | 45 | 0.6786 | 0.7958 | 0.7325 | 191 | 0.7504 | 0.7502 | 0.7503 | 0.9655 | | 0.0006 | 64 | 67904 | 0.3711 | 0.7404 | 0.7628 | 0.7514 | 0.9656 | 0.6911 | 0.7097 | 0.7003 | 372 | 0.3784 | 0.5 | 0.4308 | 28 | 0.8455 | 0.8405 | 0.8430 | 840 | 0.6 | 0.5034 | 0.5474 | 149 | 0.65 | 0.5909 | 0.6190 | 88 | 0.6667 | 0.7029 | 0.6843 | 589 | 0.8350 | 0.8961 | 0.8645 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.5673 | 0.5339 | 0.5501 | 221 | 0.2917 | 0.3111 | 0.3011 | 45 | 0.6568 | 0.8115 | 0.7260 | 191 | 0.7404 | 0.7628 | 0.7514 | 0.9656 | | 0.0007 | 65 | 68965 | 0.3672 | 0.7377 | 0.7696 | 0.7533 | 0.9661 | 0.7005 | 0.7419 | 0.7206 | 372 | 0.3333 | 0.5357 | 0.4110 | 28 | 0.8433 | 0.8393 | 0.8413 | 840 | 0.5839 | 0.5369 | 0.5594 | 149 | 
0.6506 | 0.6136 | 0.6316 | 88 | 0.6840 | 0.7131 | 0.6983 | 589 | 0.8412 | 0.8815 | 0.8609 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5427 | 0.5747 | 0.5582 | 221 | 0.3019 | 0.3556 | 0.3265 | 45 | 0.6360 | 0.7958 | 0.7070 | 191 | 0.7377 | 0.7696 | 0.7533 | 0.9661 | | 0.0005 | 66 | 70026 | 0.3768 | 0.7496 | 0.7520 | 0.7508 | 0.9657 | 0.6903 | 0.7070 | 0.6985 | 372 | 0.3415 | 0.5 | 0.4058 | 28 | 0.8454 | 0.8333 | 0.8393 | 840 | 0.6372 | 0.4832 | 0.5496 | 149 | 0.6795 | 0.6023 | 0.6386 | 88 | 0.6914 | 0.6655 | 0.6782 | 589 | 0.8483 | 0.8788 | 0.8633 | 751 | 0.9577 | 0.9315 | 0.9444 | 73 | 0.5714 | 0.5792 | 0.5753 | 221 | 0.3 | 0.3333 | 0.3158 | 45 | 0.6696 | 0.7958 | 0.7273 | 191 | 0.7496 | 0.7520 | 0.7508 | 0.9657 | | 0.0007 | 67 | 71087 | 0.3682 | 0.7461 | 0.7664 | 0.7561 | 0.9656 | 0.7094 | 0.7285 | 0.7188 | 372 | 0.3409 | 0.5357 | 0.4167 | 28 | 0.8563 | 0.8369 | 0.8465 | 840 | 0.6290 | 0.5235 | 0.5714 | 149 | 0.6974 | 0.6023 | 0.6463 | 88 | 0.6935 | 0.6876 | 0.6905 | 589 | 0.8363 | 0.8842 | 0.8595 | 751 | 0.9437 | 0.9178 | 0.9306 | 73 | 0.5175 | 0.6018 | 0.5565 | 221 | 0.4694 | 0.5111 | 0.4894 | 45 | 0.6483 | 0.8010 | 0.7166 | 191 | 0.7461 | 0.7664 | 0.7561 | 0.9656 | | 0.0005 | 68 | 72148 | 0.3815 | 0.7590 | 0.7416 | 0.7502 | 0.9654 | 0.7092 | 0.7016 | 0.7054 | 372 | 0.4054 | 0.5357 | 0.4615 | 28 | 0.8489 | 0.8095 | 0.8288 | 840 | 0.6796 | 0.4698 | 0.5556 | 149 | 0.6456 | 0.5795 | 0.6108 | 88 | 0.6801 | 0.6570 | 0.6684 | 589 | 0.8476 | 0.8815 | 0.8642 | 751 | 0.9571 | 0.9178 | 0.9371 | 73 | 0.615 | 0.5566 | 0.5843 | 221 | 0.4348 | 0.4444 | 0.4396 | 45 | 0.6759 | 0.7644 | 0.7174 | 191 | 0.7590 | 0.7416 | 0.7502 | 0.9654 | | 0.0006 | 69 | 73209 | 0.3919 | 0.7494 | 0.7487 | 0.7491 | 0.9650 | 0.6888 | 0.6962 | 0.6925 | 372 | 0.3590 | 0.5 | 0.4179 | 28 | 0.8416 | 0.8095 | 0.8252 | 840 | 0.5865 | 0.5235 | 0.5532 | 149 | 0.6901 | 0.5568 | 0.6164 | 88 | 0.6950 | 0.6808 | 0.6878 | 589 | 0.8490 | 0.8908 | 0.8694 | 751 | 1.0 | 0.9041 | 0.9496 | 73 | 0.5662 | 0.5611 | 0.5636 | 221 | 0.3265 | 0.3556 | 0.3404 | 45 | 0.6881 | 0.7853 | 0.7335 | 191 | 0.7494 | 0.7487 | 0.7491 | 0.9650 | | 0.0006 | 70 | 74270 | 0.3704 | 0.7587 | 0.7619 | 0.7603 | 0.9666 | 0.6891 | 0.7151 | 0.7018 | 372 | 0.3947 | 0.5357 | 0.4545 | 28 | 0.8376 | 0.8536 | 0.8455 | 840 | 0.6697 | 0.4899 | 0.5659 | 149 | 0.6420 | 0.5909 | 0.6154 | 88 | 0.7018 | 0.6791 | 0.6903 | 589 | 0.8491 | 0.8842 | 0.8663 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.6219 | 0.5656 | 0.5924 | 221 | 0.3913 | 0.4 | 0.3956 | 45 | 0.6802 | 0.7906 | 0.7312 | 191 | 0.7587 | 0.7619 | 0.7603 | 0.9666 | | 0.0005 | 71 | 75331 | 0.3841 | 0.7501 | 0.7634 | 0.7567 | 0.9659 | 0.7005 | 0.6855 | 0.6929 | 372 | 0.4054 | 0.5357 | 0.4615 | 28 | 0.8531 | 0.8298 | 0.8413 | 840 | 0.6293 | 0.4899 | 0.5509 | 149 | 0.6410 | 0.5682 | 0.6024 | 88 | 0.6774 | 0.7165 | 0.6964 | 589 | 0.8264 | 0.9001 | 0.8617 | 751 | 0.9706 | 0.9041 | 0.9362 | 73 | 0.5882 | 0.5882 | 0.5882 | 221 | 0.4545 | 0.4444 | 0.4494 | 45 | 0.6864 | 0.7906 | 0.7348 | 191 | 0.7501 | 0.7634 | 0.7567 | 0.9659 | | 0.0005 | 72 | 76392 | 0.3830 | 0.7605 | 0.7496 | 0.7550 | 0.9655 | 0.7036 | 0.6828 | 0.6930 | 372 | 0.3824 | 0.4643 | 0.4194 | 28 | 0.8618 | 0.8238 | 0.8424 | 840 | 0.6542 | 0.4698 | 0.5469 | 149 | 0.6582 | 0.5909 | 0.6228 | 88 | 0.6935 | 0.6723 | 0.6828 | 589 | 0.8476 | 0.8815 | 0.8642 | 751 | 0.9577 | 0.9315 | 0.9444 | 73 | 0.5830 | 0.5882 | 0.5856 | 221 | 0.4043 | 0.4222 | 0.4130 | 45 | 0.6892 | 0.8010 | 0.7409 | 191 | 0.7605 | 0.7496 | 0.7550 | 0.9655 | | 0.0006 | 73 | 77453 | 0.3839 | 0.7611 | 0.7547 | 0.7579 
| 0.9661 | 0.712 | 0.7177 | 0.7149 | 372 | 0.3429 | 0.4286 | 0.3810 | 28 | 0.8494 | 0.8393 | 0.8443 | 840 | 0.6542 | 0.4698 | 0.5469 | 149 | 0.6538 | 0.5795 | 0.6145 | 88 | 0.6877 | 0.6655 | 0.6764 | 589 | 0.8428 | 0.8921 | 0.8668 | 751 | 0.9710 | 0.9178 | 0.9437 | 73 | 0.6257 | 0.5294 | 0.5735 | 221 | 0.4468 | 0.4667 | 0.4565 | 45 | 0.6814 | 0.8063 | 0.7386 | 191 | 0.7611 | 0.7547 | 0.7579 | 0.9661 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0 ## BibTeX entry and citation info ``` @misc{tanvir2020estbert, title={EstBERT: A Pretrained Language-Specific BERT for Estonian}, author={Hasan Tanvir and Claudia Kittask and Kairit Sirts}, year={2020}, eprint={2011.04784}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
mradermacher/mpt-7b-GGUF
mradermacher
2024-09-09T09:31:25Z
13
0
transformers
[ "transformers", "gguf", "Composer", "MosaicML", "llm-foundry", "StreamingDatasets", "en", "dataset:mc4", "dataset:c4", "dataset:togethercomputer/RedPajama-Data-1T", "dataset:bigcode/the-stack", "dataset:allenai/s2orc", "base_model:mosaicml/mpt-7b", "base_model:quantized:mosaicml/mpt-7b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-09-08T04:59:19Z
--- base_model: mosaicml/mpt-7b datasets: - mc4 - c4 - togethercomputer/RedPajama-Data-1T - bigcode/the-stack - allenai/s2orc language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - Composer - MosaicML - llm-foundry - StreamingDatasets --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/mosaicml/mpt-7b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/mpt-7b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/mpt-7b-GGUF/resolve/main/mpt-7b.Q2_K.gguf) | Q2_K | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/mpt-7b-GGUF/resolve/main/mpt-7b.IQ3_XS.gguf) | IQ3_XS | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/mpt-7b-GGUF/resolve/main/mpt-7b.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/mpt-7b-GGUF/resolve/main/mpt-7b.Q3_K_S.gguf) | Q3_K_S | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/mpt-7b-GGUF/resolve/main/mpt-7b.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/mpt-7b-GGUF/resolve/main/mpt-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/mpt-7b-GGUF/resolve/main/mpt-7b.IQ4_XS.gguf) | IQ4_XS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/mpt-7b-GGUF/resolve/main/mpt-7b.Q4_K_S.gguf) | Q4_K_S | 3.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mpt-7b-GGUF/resolve/main/mpt-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/mpt-7b-GGUF/resolve/main/mpt-7b.Q4_K_M.gguf) | Q4_K_M | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mpt-7b-GGUF/resolve/main/mpt-7b.Q5_K_S.gguf) | Q5_K_S | 4.7 | | | [GGUF](https://huggingface.co/mradermacher/mpt-7b-GGUF/resolve/main/mpt-7b.Q5_K_M.gguf) | Q5_K_M | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/mpt-7b-GGUF/resolve/main/mpt-7b.Q6_K.gguf) | Q6_K | 5.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/mpt-7b-GGUF/resolve/main/mpt-7b.Q8_0.gguf) | Q8_0 | 7.2 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
KTH77/gemma-2b-it-bnb-4bit-KTH-2
KTH77
2024-09-09T09:26:59Z
78
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-09-09T09:19:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Intel/neural-chat-7b-v3-1
Intel
2024-09-09T09:24:54Z
152,514
546
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "LLMs", "Intel", "conversational", "en", "dataset:Open-Orca/SlimOrca", "arxiv:2306.02707", "base_model:mistralai/Mistral-7B-v0.1", "base_model:finetune:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-14T07:03:44Z
---
license: apache-2.0
tags:
- LLMs
- mistral
- Intel
pipeline_tag: text-generation
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: neural-chat-7b-v3-1
  results:
  - task:
      type: Large Language Model
      name: Large Language Model
    dataset:
      type: Open-Orca/SlimOrca
      name: Open-Orca/SlimOrca
    metrics:
    - type: ARC (25-shot)
      value: 66.21
      name: ARC (25-shot)
      verified: true
    - type: HellaSwag (10-shot)
      value: 83.64
      name: HellaSwag (10-shot)
      verified: true
    - type: MMLU (5-shot)
      value: 62.37
      name: MMLU (5-shot)
      verified: true
    - type: TruthfulQA (0-shot)
      value: 59.65
      name: TruthfulQA (0-shot)
      verified: true
    - type: Winogrande (5-shot)
      value: 78.14
      name: Winogrande (5-shot)
      verified: true
    - type: GSM8K (5-shot)
      value: 19.56
      name: GSM8K (5-shot)
      verified: true
    - type: DROP (3-shot)
      value: 43.84
      name: DROP (3-shot)
      verified: true
datasets:
- Open-Orca/SlimOrca
language:
- en
---

## Model Details: Neural-Chat-v3-1

This model is a 7B parameter LLM fine-tuned on the Intel Gaudi 2 processor from [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the open source dataset [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca). The model was aligned using the Direct Preference Optimization (DPO) method with [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs). For more information, refer to the Medium article [The Practice of Supervised Fine-tuning and Direct Preference Optimization on Intel Gaudi2](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3).

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/6297f0e30bd2f58c647abb1d/ctASHUT5QYIxMsOFa-sHC.webp" width="500"/>
  Photo by Google DeepMind on Unsplash
</p>

| Model Detail | Description |
| ----------- | ----------- |
| Model Authors - Company | Intel. The NeuralChat team with members from DCAI/AISE/AIPT. Core team members: Kaokao Lv, Liang Lv, Chang Wang, Wenxin Zhang, Xuhui Ren, and Haihao Shen. |
| Date | October, 2023 |
| Version | v3-1 |
| Type | 7B Large Language Model |
| Paper or Other Resources | [Medium Blog](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3) |
| License | Apache 2.0 |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/neural-chat-7b-v3-1/discussions) and [Intel DevHub Discord](https://discord.gg/rv2Gp55UJQ) |

| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | You can use the fine-tuned model for several language-related tasks. Check out the [LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) to see how this model is doing. |
| Primary intended users | Anyone doing inference on language-related tasks. |
| Out-of-scope uses | This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people. |
## How To Use

Context length for this model: 8192 tokens (same as https://huggingface.co/mistralai/Mistral-7B-v0.1)

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-04
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-HPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2.0

### Reproduce the model

Here is the sample code to reproduce the model: [GitHub sample code](https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/neural_chat/examples/finetuning/finetune_neuralchat_v3). Here is the documentation to reproduce building the model:

```bash
git clone https://github.com/intel/intel-extension-for-transformers.git
cd intel-extension-for-transformers

docker build --no-cache ./ --target hpu --build-arg REPO=https://github.com/intel/intel-extension-for-transformers.git --build-arg ITREX_VER=main -f ./intel_extension_for_transformers/neural_chat/docker/Dockerfile -t chatbot_finetuning:latest

docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host chatbot_finetuning:latest

# after entering docker container
cd examples/finetuning/finetune_neuralchat_v3
```

We use the pretrained mistralai/Mistral-7B-v0.1 model and the open source dataset Open-Orca/SlimOrca to conduct the experiment.

The script below uses DeepSpeed ZeRO-2 to launch the training on 8 Gaudi2 cards. In `finetune_neuralchat_v3.py`, the defaults `use_habana=True, use_lazy_mode=True, device="hpu"` are set for Gaudi2. To run on an NVIDIA GPU instead, set `use_habana=False, use_lazy_mode=False, device="auto"`.

```bash
deepspeed --include localhost:0,1,2,3,4,5,6,7 \
    --master_port 29501 \
    finetune_neuralchat_v3.py
```

Merge the LoRA weights:

```bash
python apply_lora.py \
    --base-model-path mistralai/Mistral-7B-v0.1 \
    --lora-model-path finetuned_model/ \
    --output-path finetuned_model_lora
```
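Before moving on to inference, it can be worth sanity-checking that the merged checkpoint loads and generates. A minimal sketch, assuming the merge step above also saved the tokenizer into `finetuned_model_lora` (otherwise, load the tokenizer from the base model):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged (LoRA-applied) checkpoint produced by apply_lora.py above
model = AutoModelForCausalLM.from_pretrained("finetuned_model_lora")
tokenizer = AutoTokenizer.from_pretrained("finetuned_model_lora")

# Same prompt template as in the inference examples below
prompt = "### System:\nYou are a helpful assistant.\n### User:\nSay hello.\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```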
### FP32 Inference with Transformers

```python
import transformers

model_name = 'Intel/neural-chat-7b-v3-1'
model = transformers.AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)

def generate_response(system_input, user_input):
    # Format the input using the provided template
    prompt = f"### System:\n{system_input}\n### User:\n{user_input}\n### Assistant:\n"

    # Tokenize and encode the prompt
    inputs = tokenizer.encode(prompt, return_tensors="pt", add_special_tokens=False)

    # Generate a response
    outputs = model.generate(inputs, max_length=1000, num_return_sequences=1)
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)

    # Extract only the assistant's response
    return response.split("### Assistant:\n")[-1]

# Example usage
system_input = "You are a math expert assistant. Your mission is to help users understand and solve various math problems. You should provide step-by-step solutions, explain reasonings and give the correct answer."
user_input = "calculate 100 + 520 + 60"
response = generate_response(system_input, user_input)
print(response)

# expected response
"""
To calculate the sum of 100, 520, and 60, we will follow these steps:

1. Add the first two numbers: 100 + 520
2. Add the result from step 1 to the third number: (100 + 520) + 60

Step 1: Add 100 and 520
100 + 520 = 620

Step 2: Add the result from step 1 to the third number (60)
(620) + 60 = 680

So, the sum of 100, 520, and 60 is 680.
"""
```
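The same prompt template can also be driven through the high-level `text-generation` pipeline; a sketch for comparison (the generation parameters here are illustrative, not settings recommended by this card):

```python
import transformers

generator = transformers.pipeline("text-generation", model="Intel/neural-chat-7b-v3-1")

prompt = (
    "### System:\nYou are a math expert assistant.\n"
    "### User:\ncalculate 100 + 520 + 60\n"
    "### Assistant:\n"
)

# return_full_text=False strips the prompt, leaving only the assistant's reply
result = generator(prompt, max_new_tokens=200, return_full_text=False)
print(result[0]["generated_text"])
```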
| Datasets | The training data are from [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca). There is no contamination from the GSM8k test set, as this is not a part of the Open-Orca/SlimOrca dataset. |
| Motivation | - |
| Preprocessing | - |

## Quantitative Analyses

The model was submitted to the [LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The detailed submission can be found here: [https://huggingface.co/datasets/open-llm-leaderboard/details_Intel__neural-chat-7b-v3-1](https://huggingface.co/datasets/open-llm-leaderboard/details_Intel__neural-chat-7b-v3-1). The metrics can be found below and show that the model has significantly improved performance over Mistral-7B-v0.1 and neural-chat-7b-v3.

| Model | Average ⬆️| ARC (25-s) ⬆️ | HellaSwag (10-s) ⬆️ | MMLU (5-s) ⬆️| TruthfulQA (MC) (0-s) ⬆️ | Winogrande (5-s) | GSM8K (5-s) | DROP (3-s) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
|[mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 50.32 | 59.58 | 83.31 | 64.16 | 42.15 | 78.37 | 18.12 | 6.14 |
| [Intel/neural-chat-7b-v3](https://huggingface.co/Intel/neural-chat-7b-v3) | **57.31** | 67.15 | 83.29 | 62.26 | 58.77 | 78.06 | 1.21 | 50.43 |
| [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) | **59.06** | 66.21 | 83.64 | 62.37 | 59.65 | 78.14 | 19.56 | 43.84 |

## Testing Model Quantizability

The following code block can be run to determine, for PyTorch models, whether the model is amenable to quantization. One caveat: the Intel Extension for PyTorch uses Optimum IPEX, which is pre-release and needs further testing.

To install the dependencies, first install Intel Extension for PyTorch and then pip install each of the following:
- torch
- optimum.intel
- optimum[ipex]
- transformers

### Intel Extension for PyTorch method

Here we test whether neural-chat-7b-v3-1 can be quantized, and the test also reports the change in model size. For example, when the base dtype is torch.bfloat16 but load_in_4bit=True is also specified (which causes the weights only to be quantized), the test output is as follows:

- **model_quantize_internal: model size = 27625.02 MB**
- **model_quantize_internal: quant size = 4330.80 MB**

This code should be run from within a Python script, such as ipex_test.py:

```python
import torch
import os
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

model_name = "Intel/neural-chat-7b-v3-1"
prompt = "Once upon a time, there existed a little girl,"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids

typ = torch.bfloat16
result = {typ: "failed"}
outputs = None
try:
    # load_in_4bit=True quantizes the weights only; torch_dtype sets the compute dtype
    model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True, torch_dtype=typ)
    outputs = model.generate(inputs, max_new_tokens=20)
    result[typ] = f"passed, {os.stat(model.bin_file).st_size}"
except Exception:
    result[typ] = "failed"

print("\n\nResults of quantizing: ")
# determine if quantized by scanning the captured log
with open(r"output.log", 'r') as fp:
    for l_no, line in enumerate(fp):
        # search string
        if 'model_quantize_internal' in line:
            print(line)

print("\n\nExecution results ")
for k, v in result.items():
    print(k, v)

print("\n\nModel Output: ")
if outputs is not None:
    print(tokenizer.decode(outputs[0], skip_special_tokens=True).strip())
```

Run the code as follows from a bash terminal:

```bash
python ipex_test.py 2>&1 | tee output.log
```

The entire output is captured in output.log; a summary is printed as well, along
with output from the model indicating either pass or fail of the quantization, as well as the model output for the given prompt.

## Ethical Considerations and Limitations

Neural-chat-7b-v3-1 can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of neural-chat-7b-v3-1, developers should perform safety testing.

## Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

Here are a couple of useful links to learn more about Intel's AI software:
* Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
* Intel Extension for Transformers [link](https://github.com/intel/intel-extension-for-transformers)

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
Binarybardakshat/SVLM
Binarybardakshat
2024-09-09T09:24:50Z
112
0
transformers
[ "transformers", "tf", "safetensors", "bart", "text2text-generation", "code", "en", "dataset:Binarybardakshat/SVLM-ACL-DATASET", "license:openrail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-08-25T10:58:39Z
---
license: openrail
datasets:
- Binarybardakshat/SVLM-ACL-DATASET
language:
- en
library_name: transformers
tags:
- code
---

# SVLM: A Question-Answering Model for ACL Research Papers

This model, `SVLM`, is designed to answer questions based on research papers from the ACL dataset. It leverages the BART architecture to generate precise answers from scientific abstracts.

## Model Details

- **Model Architecture:** BART (Bidirectional and Auto-Regressive Transformers)
- **Framework:** TensorFlow
- **Dataset:** [Binarybardakshat/SVLM-ACL-DATASET](https://huggingface.co/datasets/Binarybardakshat/SVLM-ACL-DATASET)
- **Author:** @binarybard (Akshat Shukla)
- **Purpose:** The model is trained to provide answers to questions from the ACL research paper dataset.

## Usage

To use this model with the Hugging Face Transformers library:

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Binarybardakshat/SVLM")
model = TFAutoModelForSeq2SeqLM.from_pretrained("Binarybardakshat/SVLM")

# Example input
input_text = "What is the main contribution of the paper titled 'Your Paper Title'?"

# Tokenize input
inputs = tokenizer(input_text, return_tensors="tf", padding=True, truncation=True)

# Generate answer
outputs = model.generate(inputs.input_ids, max_length=50, num_beams=5, early_stopping=True)
answer = tokenizer.decode(outputs[0], skip_special_tokens=True)

print("Answer:", answer)
```
Shakker-Labs/FilmPortrait
Shakker-Labs
2024-09-09T09:21:59Z
651
200
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "image-generation", "flux", "safetensors", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-09-08T17:49:50Z
---
tags:
- text-to-image
- stable-diffusion
- diffusers
- image-generation
- flux
- safetensors
widget:
- text: A young asian girl, filmfotos,film grain, reversal film photography
  output:
    url: images/1.png
- text: >-
    A young Japanese girl, profile, blue hours, Tokyo tower, filmfotos,film
    grain, reversal film photography
  output:
    url: images/2.png
- text: Tokyo street photography, filmfotos,film grain, reversal film photography
  output:
    url: images/3.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: filmfotos, film grain, reversal film photography
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
library_name: diffusers
---

# FilmPortrait

<div class="container">
<img src="./poster.jpeg" width="1024"/>
</div>

FilmPortrait is a LoRA model finetuned on FLUX.1-dev, specifically designed to enhance film texture. It embodies a subdued, low-saturation color palette reminiscent of classic Japanese cinema, which is particularly evident in its portrayal of characters (with a subtle bias towards Asian features), serene still lifes, and sweeping landscapes. The model delivers an exceptional aesthetic experience, capturing the essence of a bygone era with modern precision.

<div class="container">
<img src="./cover.jpeg" width="1024"/>
</div>

## Comparison

The following example shows a simple comparison with FLUX.1-dev under the same parameter settings.

<div class="container">
<img src="./comparison.png" width="1024"/>
</div>

## Trigger words

You should use `filmfotos, film grain, reversal film photography` to trigger the image generation.

## Inference

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights('Shakker-Labs/FilmPortrait', weight_name='filmfotos.safetensors')
pipe.fuse_lora(lora_scale=0.9)
pipe.to("cuda")

prompt = "a young girl, filmfotos, film grain, reversal film photography"

image = pipe(prompt,
             num_inference_steps=24,
             guidance_scale=3.5,
             width=768, height=1024,
).images[0]
image.save("example.png")
```

## Online Inference

You can also download this model at [Shakker AI](https://www.shakker.ai/modelinfo/ec983ff3497d46ea977dbfcd1d989f67?from=search), where we provide an online interface to generate images.

## Acknowledgements

This model was trained by our user [DynamicWang](https://www.shakker.ai/userpage/dfca7abc67c04a9492ea738d864de070/publish), who retains the copyright; we release it with permission. The model follows the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
punzel/flux_gal_gadot
punzel
2024-09-09T08:45:09Z
66
3
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2024-09-09T03:19:07Z
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
  output:
    url: images/ComfyUI_Flux_Finetune_00108_.png
- text: '-'
  output:
    url: images/ComfyUI_Flux_Finetune_00101_.png
- text: '-'
  output:
    url: images/ComfyUI_Flux_Finetune_00103_.png
- text: '-'
  output:
    url: images/ComfyUI_Flux_Finetune_00104_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---

# Gal Gadot

<Gallery />

## Model description

This LoRA was trained on 25 images of Gal Gadot using SimpleTuner for 1600 steps. A trigger word is not required.

## Download model

Weights for this model are available in Safetensors format.

[Download](/punzel/flux_gal_gadot/tree/main) them in the Files & versions tab.
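## Inference

The card ships no inference snippet, so the following is a minimal sketch of loading this LoRA with diffusers on top of its FLUX.1-dev base. It is an illustration rather than the author's workflow; `load_lora_weights` may need an explicit `weight_name` depending on how the safetensors file in the repo is named.

```python
import torch
from diffusers import FluxPipeline

# Load the base model the LoRA was trained against.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
# Repo id only; pass weight_name="<file>.safetensors" if the repo holds several files.
pipe.load_lora_weights("punzel/flux_gal_gadot")
pipe.to("cuda")

# No trigger word is required, per the model description above.
image = pipe("studio portrait photo of a woman, soft lighting",
             num_inference_steps=24, guidance_scale=3.5).images[0]
image.save("example.png")
```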
crynux-ai/stable-diffusion-v1-5
crynux-ai
2024-09-09T08:43:47Z
560,806
2
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-09-05T01:57:02Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
deepghs/ccip_onnx
deepghs
2024-09-09T08:34:16Z
0
6
null
[ "onnx", "art", "zero-shot-image-classification", "dataset:deepghs/character_similarity", "base_model:deepghs/ccip", "base_model:quantized:deepghs/ccip", "license:openrail", "region:us" ]
zero-shot-image-classification
2023-06-25T05:11:05Z
--- license: openrail datasets: - deepghs/character_similarity metrics: - f1 pipeline_tag: zero-shot-image-classification tags: - art base_model: deepghs/ccip --- | Model | F1 Score | Precision | Recall | Threshold | Cluster_2 | Cluster_Free | |:-----------------------------------:|:----------:|:-----------:|:--------:|:-----------:|:-----------:|:--------------:| | ccip-caformer_b36-24 | 0.940925 | 0.938254 | 0.943612 | 0.213231 | 0.89508 | 0.957017 | | ccip-caformer-24-randaug-pruned | 0.917211 | 0.933481 | 0.901499 | 0.178475 | 0.890366 | 0.922375 | | ccip-v2-caformer_s36-10 | 0.906422 | 0.932779 | 0.881513 | 0.207757 | 0.874592 | 0.89241 | | ccip-caformer-6-randaug-pruned_fp32 | 0.878403 | 0.893648 | 0.863669 | 0.195122 | 0.810176 | 0.897904 | | ccip-caformer-5_fp32 | 0.864363 | 0.90155 | 0.830121 | 0.183973 | 0.792051 | 0.862289 | | ccip-caformer-4_fp32 | 0.844967 | 0.870553 | 0.820842 | 0.18367 | 0.795565 | 0.868133 | | ccip-caformer_query-12 | 0.823928 | 0.871122 | 0.781585 | 0.141308 | 0.787237 | 0.809426 | | ccip-caformer-23_randaug_fp32 | 0.81625 | 0.854134 | 0.781585 | 0.136797 | 0.745697 | 0.8068 | | ccip-caformer-2-randaug-pruned_fp32 | 0.78561 | 0.800148 | 0.771592 | 0.171053 | 0.686617 | 0.728195 | | ccip-caformer-2_fp32 | 0.755125 | 0.790172 | 0.723055 | 0.141275 | 0.64977 | 0.718516 | * The calculation of `F1 Score`, `Precision`, and `Recall` considers "the characters in both images are the same" as a positive case. `Threshold` is determined by finding the maximum value on the F1 Score curve. * `Cluster_2` represents the approximate optimal clustering solution obtained by tuning the eps value in DBSCAN clustering algorithm with min_samples set to `2`, and evaluating the similarity between the obtained clusters and the true distribution using the `random_adjust_score`. * `Cluster_Free` represents the approximate optimal solution obtained by tuning the `max_eps` and `min_samples` values in the OPTICS clustering algorithm, and evaluating the similarity between the obtained clusters and the true distribution using the `random_adjust_score`.
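For readers who want to reproduce the spirit of the `Cluster_2` metric above, the sketch below shows the general recipe with scikit-learn: DBSCAN over a precomputed CCIP difference matrix with `min_samples=2`, sweeping `eps` and scoring the clusters against ground truth with an adjusted Rand score. The input files and the sweep range are assumptions for illustration, not the project's actual evaluation script.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import adjusted_rand_score

# Assumed inputs: a square matrix of pairwise CCIP differences (lower = more
# similar) and the ground-truth character id of each image.
differences = np.load("ccip_differences.npy")    # shape (n, n), hypothetical file
true_labels = np.load("character_labels.npy")    # shape (n,), hypothetical file

best_eps, best_score = None, -1.0
for eps in np.linspace(0.15, 0.25, 11):          # sweep around the thresholds above
    pred = DBSCAN(eps=eps, min_samples=2, metric="precomputed").fit_predict(differences)
    score = adjusted_rand_score(true_labels, pred)
    if score > best_score:
        best_eps, best_score = eps, score

print(f"best eps={best_eps:.3f}, adjusted Rand score={best_score:.4f}")
```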
deepghs/ccip
deepghs
2024-09-09T08:33:37Z
0
6
dghs-imgutils
[ "dghs-imgutils", "art", "anime", "character", "zero-shot-image-classification", "en", "ja", "zh", "dataset:deepghs/character_similarity", "dataset:deepghs/character_index", "license:openrail", "region:us" ]
zero-shot-image-classification
2023-05-15T11:03:28Z
---
datasets:
- deepghs/character_similarity
- deepghs/character_index
metrics:
- f1
- adjust_random_score
language:
- en
- ja
- zh
pipeline_tag: zero-shot-image-classification
library_name: dghs-imgutils
tags:
- art
- anime
- character
license: openrail
---

# CCIP

CCIP (Contrastive Anime Character Image Pre-Training) is a model for calculating the visual similarity between the anime characters in two images (limited to images containing only a single anime character). The more similar the characters in the two images are, the higher the score should be.

# Usage

Use CCIP with [imgutils](https://dghs-imgutils.deepghs.org/main/tutorials/installation/index.html):

![](https://dghs-imgutils.deepghs.org/main/_images/ccip_small.plot.py.svg)

Calculate character similarity between images:

```python
from imgutils.metrics import ccip_batch_differences

ccip_batch_differences(['ccip/1.jpg', 'ccip/2.jpg', 'ccip/6.jpg', 'ccip/7.jpg'])
# array([[6.5350548e-08, 1.6583106e-01, 4.2947042e-01, 4.0375218e-01],
#        [1.6583106e-01, 9.8025822e-08, 4.3715334e-01, 4.0748104e-01],
#        [4.2947042e-01, 4.3715334e-01, 3.2675274e-08, 3.9229470e-01],
#        [4.0375218e-01, 4.0748104e-01, 3.9229470e-01, 6.5350548e-08]], dtype=float32)
```

[More detailed instructions](https://dghs-imgutils.deepghs.org/main/api_doc/metrics/ccip.html)

# Performance

| Model | F1 Score | Precision | Recall | Threshold | Cluster_2 | Cluster_Free |
|:-----------------------------------:|:----------:|:-----------:|:--------:|:-----------:|:-----------:|:--------------:|
| ccip-caformer_b36-24 | 0.940925 | 0.938254 | 0.943612 | 0.213231 | 0.89508 | 0.957017 |
| ccip-caformer-24-randaug-pruned | 0.917211 | 0.933481 | 0.901499 | 0.178475 | 0.890366 | 0.922375 |
| ccip-v2-caformer_s36-10 | 0.906422 | 0.932779 | 0.881513 | 0.207757 | 0.874592 | 0.89241 |
| ccip-caformer-6-randaug-pruned_fp32 | 0.878403 | 0.893648 | 0.863669 | 0.195122 | 0.810176 | 0.897904 |
| ccip-caformer-5_fp32 | 0.864363 | 0.90155 | 0.830121 | 0.183973 | 0.792051 | 0.862289 |
| ccip-caformer-4_fp32 | 0.844967 | 0.870553 | 0.820842 | 0.18367 | 0.795565 | 0.868133 |
| ccip-caformer_query-12 | 0.823928 | 0.871122 | 0.781585 | 0.141308 | 0.787237 | 0.809426 |
| ccip-caformer-23_randaug_fp32 | 0.81625 | 0.854134 | 0.781585 | 0.136797 | 0.745697 | 0.8068 |
| ccip-caformer-2-randaug-pruned_fp32 | 0.78561 | 0.800148 | 0.771592 | 0.171053 | 0.686617 | 0.728195 |
| ccip-caformer-2_fp32 | 0.755125 | 0.790172 | 0.723055 | 0.141275 | 0.64977 | 0.718516 |

* The calculation of `F1 Score`, `Precision`, and `Recall` considers "the characters in both images are the same" as a positive case. `Threshold` is determined by finding the maximum value on the F1 Score curve.
* `Cluster_2` represents the approximate optimal clustering solution obtained by tuning the eps value in the DBSCAN clustering algorithm with min_samples set to `2`, and evaluating the similarity between the obtained clusters and the true distribution using the `random_adjust_score`.
* `Cluster_Free` represents the approximate optimal solution obtained by tuning the `max_eps` and `min_samples` values in the OPTICS clustering algorithm, and evaluating the similarity between the obtained clusters and the true distribution using the `random_adjust_score`.

![operations benchmark](https://dghs-imgutils.deepghs.org/main/_images/ccip_benchmark.plot.py.svg)

# Citation

```bibtex
@misc{CCIP,
    title={Contrastive Anime Character Image Pre-Training},
    author={Ziyi Dong and narugo1992},
    year={2024},
    howpublished={\url{https://huggingface.co/deepghs/ccip}}
}
```
RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf
RichardErkhov
2024-09-09T08:26:51Z
14
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-09-09T03:09:58Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Blur-7b-slerp-v1.41 - GGUF - Model creator: https://huggingface.co/liminerity/ - Original model: https://huggingface.co/liminerity/Blur-7b-slerp-v1.41/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Blur-7b-slerp-v1.41.Q2_K.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.Q2_K.gguf) | Q2_K | 2.53GB | | [Blur-7b-slerp-v1.41.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [Blur-7b-slerp-v1.41.IQ3_S.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.IQ3_S.gguf) | IQ3_S | 2.96GB | | [Blur-7b-slerp-v1.41.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [Blur-7b-slerp-v1.41.IQ3_M.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.IQ3_M.gguf) | IQ3_M | 3.06GB | | [Blur-7b-slerp-v1.41.Q3_K.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.Q3_K.gguf) | Q3_K | 3.28GB | | [Blur-7b-slerp-v1.41.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [Blur-7b-slerp-v1.41.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [Blur-7b-slerp-v1.41.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [Blur-7b-slerp-v1.41.Q4_0.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.Q4_0.gguf) | Q4_0 | 3.83GB | | [Blur-7b-slerp-v1.41.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [Blur-7b-slerp-v1.41.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [Blur-7b-slerp-v1.41.Q4_K.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.Q4_K.gguf) | Q4_K | 4.07GB | | [Blur-7b-slerp-v1.41.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [Blur-7b-slerp-v1.41.Q4_1.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.Q4_1.gguf) | Q4_1 | 4.24GB | | [Blur-7b-slerp-v1.41.Q5_0.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.Q5_0.gguf) | Q5_0 | 4.65GB | | [Blur-7b-slerp-v1.41.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [Blur-7b-slerp-v1.41.Q5_K.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.Q5_K.gguf) | Q5_K | 4.78GB | | 
[Blur-7b-slerp-v1.41.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [Blur-7b-slerp-v1.41.Q5_1.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.Q5_1.gguf) | Q5_1 | 5.07GB | | [Blur-7b-slerp-v1.41.Q6_K.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.Q6_K.gguf) | Q6_K | 5.53GB | | [Blur-7b-slerp-v1.41.Q8_0.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.41-gguf/blob/main/Blur-7b-slerp-v1.41.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- license: apache-2.0 tags: - merge - mergekit - lazymergekit - bardsai/jaskier-7b-dpo-v5.6 - liminerity/merge model-index: - name: Blur-7b-slerp-v1.41 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 72.78 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.41 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.65 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.41 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.84 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.41 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 74.23 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.41 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.9 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.41 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 71.49 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.41 name: Open LLM Leaderboard --- # Blur-7b-slerp-v1.41 Blur-7b-slerp-v1.41 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6) * [liminerity/merge](https://huggingface.co/liminerity/merge) ## 🧩 Configuration ```yaml slices: - sources: - model: bardsai/jaskier-7b-dpo-v5.6 layer_range: [0, 32] - model: liminerity/merge layer_range: [0, 32] merge_method: slerp base_model: bardsai/jaskier-7b-dpo-v5.6 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 
0.3, 0] - value: 0.5 dtype: float16 #im pretty sure this will be bricked so ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_liminerity__Blur-7b-slerp-v1.41) | Metric |Value| |---------------------------------|----:| |Avg. |75.98| |AI2 Reasoning Challenge (25-Shot)|72.78| |HellaSwag (10-Shot) |88.65| |MMLU (5-Shot) |64.84| |TruthfulQA (0-shot) |74.23| |Winogrande (5-shot) |83.90| |GSM8k (5-shot) |71.49|
WpythonW/RUbert-tiny_custom_CE-v2
WpythonW
2024-09-09T08:26:07Z
105
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "cross-encoder", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-09T08:26:00Z
--- library_name: transformers tags: - cross-encoder --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
John6666/lemontart-v25-sdxl
John6666
2024-09-09T08:22:13Z
61
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "2.5D", "2D", "cute", "pony", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-09-09T08:17:29Z
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- 2.5D
- 2D
- cute
- pony
---

The original model is [here](https://civitai.com/models/641379?modelVersionId=810987). This model was created by [Random992](https://civitai.com/user/Random992).
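The repo is tagged as a diffusers-format SDXL checkpoint, so a standard pipeline load should work; the sketch below is illustrative only, and the prompt and settings are assumptions rather than the author's recommendations.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the checkpoint directly from this repo (diffusers format per its tags).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/lemontart-v25-sdxl", torch_dtype=torch.float16
).to("cuda")

# Illustrative prompt; see the Civitai page for the author's suggested usage.
image = pipe("1girl, cute, anime style, masterpiece", num_inference_steps=28).images[0]
image.save("lemontart.png")
```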
seongyeon1/nursinghome-finetuned-gemma
seongyeon1
2024-09-09T08:20:23Z
118
0
transformers
[ "transformers", "pytorch", "gemma", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/gemma-2b-it-bnb-4bit", "base_model:finetune:unsloth/gemma-2b-it-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T08:16:22Z
--- base_model: unsloth/gemma-2b-it-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl --- # Uploaded model - **Developed by:** seongyeon1 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2b-it-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Saitun/babydragon
Saitun
2024-09-09T08:16:16Z
7
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stable-diffusion-v1-5/stable-diffusion-v1-5", "base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5", "license:openrail", "region:us" ]
text-to-image
2024-09-09T08:14:27Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '-' output: url: images/2024-09-09 14.41.44.jpg base_model: stable-diffusion-v1-5/stable-diffusion-v1-5 instance_prompt: null license: openrail --- # Babydragon <Gallery /> ## Model description baby dragon, dragon ## Download model Weights for this model are available in Safetensors format. [Download](/Saitun/babydragon/tree/main) them in the Files & versions tab.
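## Inference

As with other SD 1.5 LoRAs, the weights can be loaded on top of the listed base model. This is a minimal sketch, not the author's workflow; pass `weight_name="<file>.safetensors"` explicitly if diffusers cannot resolve the file on its own.

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model listed in the card's metadata.
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Repo id only; add weight_name= if the repo holds several safetensors files.
pipe.load_lora_weights("Saitun/babydragon")
pipe.to("cuda")

image = pipe("a cute baby dragon", num_inference_steps=30).images[0]
image.save("babydragon.png")
```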
mradermacher/pythia-6.9b-deduped-i1-GGUF
mradermacher
2024-09-09T07:57:00Z
251
0
transformers
[ "transformers", "gguf", "pytorch", "causal-lm", "pythia", "en", "dataset:EleutherAI/the_pile_deduplicated", "base_model:EleutherAI/pythia-6.9b-deduped", "base_model:quantized:EleutherAI/pythia-6.9b-deduped", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix" ]
null
2024-09-08T13:52:55Z
--- base_model: EleutherAI/pythia-6.9b-deduped datasets: - EleutherAI/the_pile_deduplicated language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - pytorch - causal-lm - pythia --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/EleutherAI/pythia-6.9b-deduped <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/pythia-6.9b-deduped-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-deduped-i1-GGUF/resolve/main/pythia-6.9b-deduped.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-deduped-i1-GGUF/resolve/main/pythia-6.9b-deduped.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-deduped-i1-GGUF/resolve/main/pythia-6.9b-deduped.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-deduped-i1-GGUF/resolve/main/pythia-6.9b-deduped.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.2 | | | [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-deduped-i1-GGUF/resolve/main/pythia-6.9b-deduped.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-deduped-i1-GGUF/resolve/main/pythia-6.9b-deduped.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-deduped-i1-GGUF/resolve/main/pythia-6.9b-deduped.i1-Q2_K.gguf) | i1-Q2_K | 2.7 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-deduped-i1-GGUF/resolve/main/pythia-6.9b-deduped.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-deduped-i1-GGUF/resolve/main/pythia-6.9b-deduped.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-deduped-i1-GGUF/resolve/main/pythia-6.9b-deduped.i1-IQ3_S.gguf) | i1-IQ3_S | 3.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-deduped-i1-GGUF/resolve/main/pythia-6.9b-deduped.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.1 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-deduped-i1-GGUF/resolve/main/pythia-6.9b-deduped.i1-IQ3_M.gguf) | i1-IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-deduped-i1-GGUF/resolve/main/pythia-6.9b-deduped.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.7 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-deduped-i1-GGUF/resolve/main/pythia-6.9b-deduped.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-deduped-i1-GGUF/resolve/main/pythia-6.9b-deduped.i1-Q4_0.gguf) | i1-Q4_0 | 4.0 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-deduped-i1-GGUF/resolve/main/pythia-6.9b-deduped.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.0 | IQ3_M probably better | | 
[GGUF](https://huggingface.co/mradermacher/pythia-6.9b-deduped-i1-GGUF/resolve/main/pythia-6.9b-deduped.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.1 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-deduped-i1-GGUF/resolve/main/pythia-6.9b-deduped.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-deduped-i1-GGUF/resolve/main/pythia-6.9b-deduped.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-deduped-i1-GGUF/resolve/main/pythia-6.9b-deduped.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-deduped-i1-GGUF/resolve/main/pythia-6.9b-deduped.i1-Q6_K.gguf) | i1-Q6_K | 5.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
gzlixiaochao/Llama-3.1-8B-bnb-4bit-wenyanwen
gzlixiaochao
2024-09-09T07:54:32Z
6
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-09-09T07:12:29Z
--- base_model: unsloth/meta-llama-3.1-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** gzlixiaochao - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
QuantFactory/Llama-3.1-8B-ArliAI-Indo-Formax-v1.0-GGUF
QuantFactory
2024-09-09T07:51:11Z
45
1
null
[ "gguf", "license:llama3.1", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-09T07:11:54Z
--- license: llama3.1 --- ![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ) # QuantFactory/Llama-3.1-8B-ArliAI-Indo-Formax-v1.0-GGUF This is quantized version of [ArliAI/Llama-3.1-8B-ArliAI-Indo-Formax-v1.0](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-Indo-Formax-v1.0) created using llama.cpp # Original Model Card Based on Meta-Llama-3.1-8B-Instruct, and is governed by Meta Llama 3.1 License agreement: https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct Indonesian language continued pre-trained and Formax instruct tuned model. Excels in Bahasa Indonesia while having Formax instruct characteristics. You can try it at https://www.arliai.com Model Llama 3.1 yang telat di-training dengan bahasa Indonesia dan juga menggunakan dataset bergaya Formax bahasa Indonesia. Cocok untuk kebutuhan text bahasa Indonesia. Bisa dicoba di website https://www.arliai.com/?lang=id Training: - 8192 sequence length - Training duration is around 6 days on 2x3090Ti - 1 epoch training with a massive dataset. - LORA with 64-rank 128-alpha resulting in ~2% trainable weights. Quants: BF16: https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-Indo-Formax-v1.0 GGUF: https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-Indo-Formax-v1.0-GGUF Suggested prompting strategy: ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a [give it a role]. You are tasked with [give it a task]. Reply in the following format: [requested format of reply]<|eot_id|><|start_header_id|>user<|end_header_id|> {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|> {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` Example prompt: ``` System: Anda adalah seorang penulis profesional. Anda harus membuat cerita yang memikat dan menarik untuk dibaca. Balas dengan format berikut:\n\nJudul: [Buat judul untuk cerita]\n\n Cerita: Di bawah matahari Azgul, [lanjutkan cerita] User: Buatlah sebuah cerita untukku tentang petualangan di dunia fantasi. Response: Judul: Pencarian Ksatria yang Hilang\n\nCerita: Di bawah matahari Azgul, tempat langit berwarna biru tua dan tanah berwarna hijau zamrud, kota El'goroth berdiri sebagai mercusuar harapan. Di jantung kota ini, di dalam tembok kastil yang megah, terdapat sebuah legenda yang telah lama terlupakan – legenda tentang Ksatria yang Hilang, seorang pahlawan yang dikatakan telah mengalahkan kegelapan itu sendiri.\n\nEira, seorang wanita muda yang memiliki rambut emas dan mata biru, tinggal di kota ini. Dia adalah seorang penyihir yang terampil, dengan kemampuan untuk berkomunikasi dengan makhluk-makhluk yang tidak diketahui. Suatu hari, saat menjelajahi hutan terdekat, dia menemukan sebuah buku kuno yang tersembunyi di balik dedaunan. Buku itu berisi petunjuk tentang Ksatria yang Hilang, dan Eira merasakan bahwa dia harus memulai pencarian untuk menemukannya.\n\nDia memulai perjalanannya, didorong oleh rasa haus akan petualangan dan keinginan untuk membuktikan dirinya. Saat dia menjelajah lebih jauh ke dalam hutan, dia bertemu dengan berbagai makhluk, beberapa di antaranya menjadi sekutu, sementara yang lain menjadi musuh. Dia bertemu dengan seekor naga yang bijaksana, yang memberinya petunjuk tentang lokasi Ksatria yang Hilang. 
Dia juga bertemu dengan sekelompok penyihir jahat, yang berusaha menghentikan pencariannya.\n\nMeskipun ada bahaya, Eira terus maju, didorong oleh tekadnya untuk menemukan Ksatria yang Hilang. Akhirnya, dia mencapai sebuah gua tersembunyi, tempat Ksatria yang Hilang bersembunyi. Namun, saat dia masuk ke dalam gua, dia menemukan bahwa Ksatria yang Hilang bukanlah pahlawan yang dia kira. Sebaliknya, dia adalah seorang penyihir jahat yang telah menggunakan kekuatannya untuk mengendalikan kota El'goroth.\n\nEira terkejut dan marah, tapi dia tahu dia harus bertindak cepat. Dia menggunakan kekuatannya untuk mengalahkan Ksatria yang Hilang dan menghancurkan kutukan yang telah menghantui kota selama bertahun-tahun. Setelah kejadian itu, Eira kembali ke kota, di mana dia diterima sebagai pahlawan. Dia menggunakan kekuatannya untuk melindungi kota dan masyarakatnya, dan legenda tentang Ksatria yang Hilang menjadi kisah yang mengingatkan akan bahaya kekuasaan yang tidak terkendali. Pada akhirnya, Eira menyadari bahwa pencariannya akan Ksatria yang Hilang telah mengubah hidupnya selamanya. Dia telah menemukan tujuan dan kekuatan dalam dirinya, dan dia tahu bahwa dia akan selalu berdiri sebagai penjaga kota dan masyarakatnya. ``` ``` System: Your are an intelligent AI assistant that can process texts. Always reply in json formatting when possible. User: Berikut ini adalah sebuah cerita pendek, identifikasikan semua karakter di dalamnya.\n\nJudul: Pencarian Ksatria yang Hilang\n\nCerita: Di bawah matahari Azgul, tempat langit berwarna biru tua dan tanah berwarna hijau zamrud, kota El'goroth berdiri sebagai mercusuar harapan. Di jantung kota ini, di dalam tembok kastil yang megah, terdapat sebuah legenda yang telah lama terlupakan – legenda tentang Ksatria yang Hilang, seorang pahlawan yang dikatakan telah mengalahkan kegelapan itu sendiri.\n\nEira, seorang wanita muda yang memiliki rambut emas dan mata biru, tinggal di kota ini. Dia adalah seorang penyihir yang terampil, dengan kemampuan untuk berkomunikasi dengan makhluk-makhluk yang tidak diketahui. Suatu hari, saat menjelajahi hutan terdekat, dia menemukan sebuah buku kuno yang tersembunyi di balik dedaunan. Buku itu berisi petunjuk tentang Ksatria yang Hilang, dan Eira merasakan bahwa dia harus memulai pencarian untuk menemukannya.\n\nDia memulai perjalanannya, didorong oleh rasa haus akan petualangan dan keinginan untuk membuktikan dirinya. Saat dia menjelajah lebih jauh ke dalam hutan, dia bertemu dengan berbagai makhluk, beberapa di antaranya menjadi sekutu, sementara yang lain menjadi musuh. Dia bertemu dengan seekor naga yang bijaksana, yang memberinya petunjuk tentang lokasi Ksatria yang Hilang. Dia juga bertemu dengan sekelompok penyihir jahat, yang berusaha menghentikan pencariannya.\n\nMeskipun ada bahaya, Eira terus maju, didorong oleh tekadnya untuk menemukan Ksatria yang Hilang. Akhirnya, dia mencapai sebuah gua tersembunyi, tempat Ksatria yang Hilang bersembunyi. Namun, saat dia masuk ke dalam gua, dia menemukan bahwa Ksatria yang Hilang bukanlah pahlawan yang dia kira. Sebaliknya, dia adalah seorang penyihir jahat yang telah menggunakan kekuatannya untuk mengendalikan kota El'goroth.\n\nEira terkejut dan marah, tapi dia tahu dia harus bertindak cepat. Dia menggunakan kekuatannya untuk mengalahkan Ksatria yang Hilang dan menghancurkan kutukan yang telah menghantui kota selama bertahun-tahun. Setelah kejadian itu, Eira kembali ke kota, di mana dia diterima sebagai pahlawan. 
Dia menggunakan kekuatannya untuk melindungi kota dan masyarakatnya, dan legenda tentang Ksatria yang Hilang menjadi kisah yang mengingatkan akan bahaya kekuasaan yang tidak terkendali. Pada akhirnya, Eira menyadari bahwa pencariannya akan Ksatria yang Hilang telah mengubah hidupnya selamanya. Dia telah menemukan tujuan dan kekuatan dalam dirinya, dan dia tahu bahwa dia akan selalu berdiri sebagai penjaga kota dan masyarakatnya. Response: {\n \"karakter\": [\n {\n \"nama\": \"Eira\",\n \"deskripsi\": \"Seorang wanita muda dengan rambut emas dan mata biru, seorang penyihir yang terampil yang dapat berkomunikasi dengan makhluk-makhluk yang tidak diketahui.\"\n },\n {\n \"nama\": \"Ksatria yang Hilang\",\n \"deskripsi\": \"Seorang penyihir jahat yang telah menggunakan kekuatannya untuk mengendalikan kota El'goroth.\"\n },\n {\n \"nama\": \"Naga yang Bijaksana\",\n \"deskripsi\": \"Seekor naga yang memberikan petunjuk kepada Eira tentang lokasi Ksatria yang Hilang.\"\n },\n {\n \"nama\": \"Penyihir Jahat\",\n \"deskripsi\": \"Sekelompok penyihir yang berusaha menghentikan pencarian Eira.\"\n }\n ]\n} ```
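To run one of these GGUF files with the prompt format suggested above, a llama-cpp-python sketch like the following should work; the quant filename, system message, and user message are illustrative assumptions.

```python
from llama_cpp import Llama

# Assumed local path to one of the quants from this repo (filename illustrative).
llm = Llama(model_path="Llama-3.1-8B-ArliAI-Indo-Formax-v1.0.Q4_K_M.gguf", n_ctx=8192)

# Llama 3.1 prompt format, as suggested in the card above.
prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "Anda adalah asisten AI yang membalas dalam bahasa Indonesia.<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "Jelaskan apa itu machine learning secara singkat.<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
out = llm(prompt, max_tokens=256, stop=["<|eot_id|>"])
print(out["choices"][0]["text"])
```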
mradermacher/Evocation-i1-GGUF
mradermacher
2024-09-09T07:48:32Z
81
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:ClaudioItaly/Evocation", "base_model:quantized:ClaudioItaly/Evocation", "endpoints_compatible", "region:us", "imatrix" ]
null
2024-09-09T05:45:15Z
--- base_model: ClaudioItaly/Evocation language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/ClaudioItaly/Evocation <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Evocation-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.5 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.5 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.5 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/Evocation-i1-GGUF/resolve/main/Evocation.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
nyarkssss/whisper-tiny-ak
nyarkssss
2024-09-09T07:44:19Z
76
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:audiofolder", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-08-23T13:08:25Z
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer datasets: - audiofolder metrics: - wer model-index: - name: whisper-tiny-ak results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: audiofolder type: audiofolder config: default split: train args: default metrics: - name: Wer type: wer value: 61.9195 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-ak This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the audiofolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8641 - Wer: 61.9195 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:----:|:---------------:|:-------:| | 0.1919 | 13.3333 | 1000 | 0.8641 | 61.9195 | | 0.0111 | 26.6667 | 2000 | 1.1524 | 64.9256 | | 0.0031 | 40.0 | 3000 | 1.2699 | 63.7272 | | 0.0022 | 53.3333 | 4000 | 1.3054 | 66.8831 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.3.1+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
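A minimal inference sketch with the 🤗 `pipeline` API follows; the audio file name is a placeholder, and nothing below is specified by this card.

```python
# Minimal sketch: transcribe an audio file with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="nyarkssss/whisper-tiny-ak")
result = asr("sample.wav")  # "sample.wav" is a placeholder path
print(result["text"])
```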
mradermacher/MN-RocinanteCelestar-12B-GGUF
mradermacher
2024-09-09T07:31:17Z
136
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:rityak/MN-RocinanteCelestar-12B", "base_model:quantized:rityak/MN-RocinanteCelestar-12B", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-08T16:48:08Z
--- base_model: rityak/MN-RocinanteCelestar-12B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/rityak/MN-RocinanteCelestar-12B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-GGUF/resolve/main/MN-RocinanteCelestar-12B.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-GGUF/resolve/main/MN-RocinanteCelestar-12B.IQ3_XS.gguf) | IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-GGUF/resolve/main/MN-RocinanteCelestar-12B.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-GGUF/resolve/main/MN-RocinanteCelestar-12B.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-GGUF/resolve/main/MN-RocinanteCelestar-12B.IQ3_M.gguf) | IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-GGUF/resolve/main/MN-RocinanteCelestar-12B.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-GGUF/resolve/main/MN-RocinanteCelestar-12B.Q3_K_L.gguf) | Q3_K_L | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-GGUF/resolve/main/MN-RocinanteCelestar-12B.IQ4_XS.gguf) | IQ4_XS | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-GGUF/resolve/main/MN-RocinanteCelestar-12B.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-GGUF/resolve/main/MN-RocinanteCelestar-12B.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-GGUF/resolve/main/MN-RocinanteCelestar-12B.Q5_K_S.gguf) | Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-GGUF/resolve/main/MN-RocinanteCelestar-12B.Q5_K_M.gguf) | Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-GGUF/resolve/main/MN-RocinanteCelestar-12B.Q6_K.gguf) | Q6_K | 10.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-GGUF/resolve/main/MN-RocinanteCelestar-12B.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
allknowingroger/llama3Yi-40B
allknowingroger
2024-09-09T07:26:54Z
34
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:01-ai/Yi-1.5-34B-Chat", "base_model:finetune:01-ai/Yi-1.5-34B-Chat", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T07:13:14Z
--- base_model: - 01-ai/Yi-1.5-34B-Chat library_name: transformers tags: - mergekit - merge license: apache-2.0 --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [01-ai/Yi-1.5-34B-Chat](https://huggingface.co/01-ai/Yi-1.5-34B-Chat) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: 01-ai/Yi-1.5-34B-Chat layer_range: [0, 39] - sources: - model: 01-ai/Yi-1.5-34B-Chat layer_range: [8, 39] merge_method: passthrough dtype: bfloat16 ```
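For context, the passthrough method simply stacks the listed layer slices of the same base model back-to-back to produce a deeper network; no weights are averaged. A loading sketch is below — the dtype and device settings are assumptions on top of what the config above states.

```python
# Minimal sketch: load the merged model with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allknowingroger/llama3Yi-40B")
model = AutoModelForCausalLM.from_pretrained(
    "allknowingroger/llama3Yi-40B",
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",           # assumes `accelerate` is installed
)
```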
heisenberg3376/animal-diffusion-128
heisenberg3376
2024-09-09T07:16:21Z
50
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2024-09-09T07:09:01Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('heisenberg3376/animal-diffusion-128') image = pipeline().images[0] image ```
jvelja/vllm-gemma2b_8
jvelja
2024-09-09T07:13:25Z
59
0
transformers
[ "transformers", "pytorch", "safetensors", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "endpoints_compatible", "region:us" ]
reinforcement-learning
2024-09-09T07:13:22Z
--- license: apache-2.0 library_name: transformers tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="jvelja/vllm-gemma2b_8") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("jvelja/vllm-gemma2b_8") model = AutoModelForCausalLMWithValueHead.from_pretrained("jvelja/vllm-gemma2b_8") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
jvelja/BERT_vllm-gemma2b_8
jvelja
2024-09-09T07:13:22Z
106
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-09T07:13:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
d1ck0n/electra-small-discriminator-qa-grokking
d1ck0n
2024-09-09T07:12:54Z
162
0
transformers
[ "transformers", "safetensors", "electra", "question-answering", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
question-answering
2024-09-09T07:12:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TenzinGayche/byt5-mitra-bo
TenzinGayche
2024-09-09T07:10:57Z
104
0
transformers
[ "transformers", "safetensors", "t5", "feature-extraction", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-09-05T07:03:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vishalkatheriya18/convnextv2-tiny-1k-224-finetuned-fullwear
vishalkatheriya18
2024-09-09T07:03:52Z
5
0
null
[ "safetensors", "convnextv2", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/convnextv2-tiny-1k-224", "base_model:finetune:facebook/convnextv2-tiny-1k-224", "license:apache-2.0", "model-index", "region:us" ]
null
2024-09-09T07:03:46Z
--- license: apache-2.0 base_model: facebook/convnextv2-tiny-1k-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: convnextv2-tiny-1k-224-finetuned-fullwear results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.8402777777777778 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnextv2-tiny-1k-224-finetuned-fullwear This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5203 - Accuracy: 0.8403 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 120 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:--------:|:----:|:---------------:|:--------:| | 2.4871 | 0.9756 | 10 | 2.4771 | 0.0694 | | 2.4464 | 1.9512 | 20 | 2.4333 | 0.1528 | | 2.3911 | 2.9268 | 30 | 2.3670 | 0.2778 | | 2.3204 | 4.0 | 41 | 2.2617 | 0.3681 | | 2.206 | 4.9756 | 51 | 2.1445 | 0.3958 | | 2.0869 | 5.9512 | 61 | 2.0146 | 0.4444 | | 1.9756 | 6.9268 | 71 | 1.8763 | 0.5139 | | 1.8124 | 8.0 | 82 | 1.7422 | 0.5486 | | 1.6624 | 8.9756 | 92 | 1.6629 | 0.5903 | | 1.587 | 9.9512 | 102 | 1.5474 | 0.6111 | | 1.4746 | 10.9268 | 112 | 1.4577 | 0.625 | | 1.359 | 12.0 | 123 | 1.3055 | 0.6736 | | 1.2412 | 12.9756 | 133 | 1.2241 | 0.6736 | | 1.1374 | 13.9512 | 143 | 1.2003 | 0.6736 | | 1.0194 | 14.9268 | 153 | 1.0233 | 0.7569 | | 0.9705 | 16.0 | 164 | 0.9492 | 0.7847 | | 0.8949 | 16.9756 | 174 | 0.9246 | 0.75 | | 0.7959 | 17.9512 | 184 | 0.8148 | 0.7639 | | 0.7491 | 18.9268 | 194 | 0.7858 | 0.7569 | | 0.6783 | 20.0 | 205 | 0.8010 | 0.7569 | | 0.6257 | 20.9756 | 215 | 0.7295 | 0.7847 | | 0.5999 | 21.9512 | 225 | 0.6219 | 0.8333 | | 0.5701 | 22.9268 | 235 | 0.5932 | 0.8403 | | 0.4926 | 24.0 | 246 | 0.5970 | 0.8056 | | 0.4692 | 24.9756 | 256 | 0.6298 | 0.8194 | | 0.4393 | 25.9512 | 266 | 0.5857 | 0.8056 | | 0.419 | 26.9268 | 276 | 0.5203 | 0.8542 | | 0.3454 | 28.0 | 287 | 0.6084 | 0.8264 | | 0.36 | 28.9756 | 297 | 0.5928 | 0.8264 | | 0.3265 | 29.9512 | 307 | 0.5303 | 0.8403 | | 0.3278 | 30.9268 | 317 | 0.6049 | 0.8194 | | 0.2766 | 32.0 | 328 | 0.5656 | 0.8264 | | 0.2805 | 32.9756 | 338 | 0.5003 | 0.8681 | | 0.2505 | 33.9512 | 348 | 0.5412 | 0.8403 | | 0.2464 | 34.9268 | 358 | 0.5410 | 0.8333 | | 0.2166 | 36.0 | 369 | 0.5000 | 0.8472 | | 0.2 | 36.9756 | 379 | 0.5053 | 0.8056 | | 0.1914 | 37.9512 | 389 | 0.5161 | 0.8403 | | 0.186 | 38.9268 | 399 | 0.4242 | 0.8681 | | 0.1592 | 40.0 | 410 | 0.5059 | 0.8472 | | 0.1598 | 40.9756 | 420 | 0.5143 | 0.8264 | | 0.1565 | 41.9512 | 430 | 0.4703 | 0.8542 | | 0.1598 | 42.9268 | 440 | 0.4384 | 0.8542 | | 0.139 | 44.0 | 451 | 0.4850 | 0.8403 | | 0.1137 | 44.9756 | 461 | 0.4405 
| 0.8542 | | 0.1158 | 45.9512 | 471 | 0.5250 | 0.8333 | | 0.1192 | 46.9268 | 481 | 0.5843 | 0.8194 | | 0.1271 | 48.0 | 492 | 0.4498 | 0.8611 | | 0.0914 | 48.9756 | 502 | 0.5167 | 0.8264 | | 0.1079 | 49.9512 | 512 | 0.4648 | 0.8681 | | 0.091 | 50.9268 | 522 | 0.5321 | 0.8194 | | 0.1053 | 52.0 | 533 | 0.4402 | 0.8611 | | 0.0842 | 52.9756 | 543 | 0.4776 | 0.8542 | | 0.0961 | 53.9512 | 553 | 0.4762 | 0.8681 | | 0.0896 | 54.9268 | 563 | 0.4477 | 0.8681 | | 0.0876 | 56.0 | 574 | 0.4951 | 0.8472 | | 0.0855 | 56.9756 | 584 | 0.5653 | 0.8125 | | 0.073 | 57.9512 | 594 | 0.5315 | 0.8472 | | 0.0804 | 58.9268 | 604 | 0.5064 | 0.8681 | | 0.0765 | 60.0 | 615 | 0.6316 | 0.8264 | | 0.0782 | 60.9756 | 625 | 0.5733 | 0.8056 | | 0.069 | 61.9512 | 635 | 0.6994 | 0.8056 | | 0.0809 | 62.9268 | 645 | 0.4898 | 0.8611 | | 0.0829 | 64.0 | 656 | 0.6042 | 0.8194 | | 0.0735 | 64.9756 | 666 | 0.4758 | 0.8611 | | 0.0763 | 65.9512 | 676 | 0.4921 | 0.8542 | | 0.0565 | 66.9268 | 686 | 0.4700 | 0.8681 | | 0.062 | 68.0 | 697 | 0.4944 | 0.8819 | | 0.0644 | 68.9756 | 707 | 0.4733 | 0.8681 | | 0.0659 | 69.9512 | 717 | 0.4703 | 0.8819 | | 0.0625 | 70.9268 | 727 | 0.5075 | 0.8542 | | 0.042 | 72.0 | 738 | 0.5464 | 0.8264 | | 0.056 | 72.9756 | 748 | 0.5186 | 0.8333 | | 0.0858 | 73.9512 | 758 | 0.5403 | 0.8264 | | 0.0616 | 74.9268 | 768 | 0.5104 | 0.8472 | | 0.0777 | 76.0 | 779 | 0.5516 | 0.8403 | | 0.0668 | 76.9756 | 789 | 0.4918 | 0.8611 | | 0.0585 | 77.9512 | 799 | 0.5692 | 0.8403 | | 0.0562 | 78.9268 | 809 | 0.5734 | 0.8403 | | 0.0653 | 80.0 | 820 | 0.5403 | 0.8264 | | 0.0434 | 80.9756 | 830 | 0.5108 | 0.8333 | | 0.0483 | 81.9512 | 840 | 0.5699 | 0.8125 | | 0.0329 | 82.9268 | 850 | 0.6028 | 0.8056 | | 0.0431 | 84.0 | 861 | 0.5230 | 0.8333 | | 0.042 | 84.9756 | 871 | 0.5875 | 0.8194 | | 0.0449 | 85.9512 | 881 | 0.5180 | 0.8611 | | 0.0512 | 86.9268 | 891 | 0.5425 | 0.8194 | | 0.0545 | 88.0 | 902 | 0.5690 | 0.8264 | | 0.0496 | 88.9756 | 912 | 0.5619 | 0.8611 | | 0.0449 | 89.9512 | 922 | 0.5626 | 0.8333 | | 0.0405 | 90.9268 | 932 | 0.5267 | 0.8403 | | 0.0344 | 92.0 | 943 | 0.5617 | 0.8403 | | 0.0421 | 92.9756 | 953 | 0.5400 | 0.8611 | | 0.0341 | 93.9512 | 963 | 0.5729 | 0.8333 | | 0.0492 | 94.9268 | 973 | 0.5855 | 0.8056 | | 0.0374 | 96.0 | 984 | 0.6113 | 0.8125 | | 0.0375 | 96.9756 | 994 | 0.5511 | 0.8403 | | 0.0373 | 97.9512 | 1004 | 0.4942 | 0.8542 | | 0.0447 | 98.9268 | 1014 | 0.5031 | 0.8542 | | 0.0519 | 100.0 | 1025 | 0.5349 | 0.8542 | | 0.0387 | 100.9756 | 1035 | 0.5511 | 0.8542 | | 0.0256 | 101.9512 | 1045 | 0.5319 | 0.8403 | | 0.043 | 102.9268 | 1055 | 0.5605 | 0.8264 | | 0.029 | 104.0 | 1066 | 0.5776 | 0.8403 | | 0.0379 | 104.9756 | 1076 | 0.5697 | 0.8472 | | 0.0445 | 105.9512 | 1086 | 0.5133 | 0.8681 | | 0.0267 | 106.9268 | 1096 | 0.5076 | 0.8681 | | 0.044 | 108.0 | 1107 | 0.5260 | 0.8403 | | 0.0263 | 108.9756 | 1117 | 0.5101 | 0.8542 | | 0.0247 | 109.9512 | 1127 | 0.4972 | 0.8542 | | 0.0441 | 110.9268 | 1137 | 0.5094 | 0.8472 | | 0.0263 | 112.0 | 1148 | 0.5259 | 0.8333 | | 0.0247 | 112.9756 | 1158 | 0.5323 | 0.8403 | | 0.0356 | 113.9512 | 1168 | 0.5275 | 0.8403 | | 0.0297 | 114.9268 | 1178 | 0.5240 | 0.8333 | | 0.044 | 116.0 | 1189 | 0.5201 | 0.8472 | | 0.031 | 116.9756 | 1199 | 0.5203 | 0.8403 | | 0.0369 | 117.0732 | 1200 | 0.5203 | 0.8403 | ### Framework versions - Transformers 4.44.0 - Pytorch 2.4.0 - Datasets 2.21.0 - Tokenizers 0.19.1
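A minimal inference sketch follows; the image path is a placeholder, and the pipeline handles the 224×224 preprocessing expected by this checkpoint.

```python
# Minimal sketch: classify a garment image with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="vishalkatheriya18/convnextv2-tiny-1k-224-finetuned-fullwear",
)
print(classifier("garment.jpg"))  # "garment.jpg" is a placeholder path
```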
mradermacher/L3.1-HermesBreakGradientXL-12B-GGUF
mradermacher
2024-09-09T07:02:41Z
55
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:rityak/L3.1-HermesBreakGradientXL-12B", "base_model:quantized:rityak/L3.1-HermesBreakGradientXL-12B", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-08T16:50:29Z
--- base_model: rityak/L3.1-HermesBreakGradientXL-12B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/rityak/L3.1-HermesBreakGradientXL-12B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3.1-HermesBreakGradientXL-12B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3.1-HermesBreakGradientXL-12B-GGUF/resolve/main/L3.1-HermesBreakGradientXL-12B.Q2_K.gguf) | Q2_K | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-HermesBreakGradientXL-12B-GGUF/resolve/main/L3.1-HermesBreakGradientXL-12B.IQ3_XS.gguf) | IQ3_XS | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-HermesBreakGradientXL-12B-GGUF/resolve/main/L3.1-HermesBreakGradientXL-12B.Q3_K_S.gguf) | Q3_K_S | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-HermesBreakGradientXL-12B-GGUF/resolve/main/L3.1-HermesBreakGradientXL-12B.IQ3_S.gguf) | IQ3_S | 5.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3.1-HermesBreakGradientXL-12B-GGUF/resolve/main/L3.1-HermesBreakGradientXL-12B.IQ3_M.gguf) | IQ3_M | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-HermesBreakGradientXL-12B-GGUF/resolve/main/L3.1-HermesBreakGradientXL-12B.Q3_K_M.gguf) | Q3_K_M | 6.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3.1-HermesBreakGradientXL-12B-GGUF/resolve/main/L3.1-HermesBreakGradientXL-12B.Q3_K_L.gguf) | Q3_K_L | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-HermesBreakGradientXL-12B-GGUF/resolve/main/L3.1-HermesBreakGradientXL-12B.IQ4_XS.gguf) | IQ4_XS | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-HermesBreakGradientXL-12B-GGUF/resolve/main/L3.1-HermesBreakGradientXL-12B.Q4_K_S.gguf) | Q4_K_S | 7.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3.1-HermesBreakGradientXL-12B-GGUF/resolve/main/L3.1-HermesBreakGradientXL-12B.Q4_K_M.gguf) | Q4_K_M | 7.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3.1-HermesBreakGradientXL-12B-GGUF/resolve/main/L3.1-HermesBreakGradientXL-12B.Q5_K_S.gguf) | Q5_K_S | 8.5 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-HermesBreakGradientXL-12B-GGUF/resolve/main/L3.1-HermesBreakGradientXL-12B.Q5_K_M.gguf) | Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-HermesBreakGradientXL-12B-GGUF/resolve/main/L3.1-HermesBreakGradientXL-12B.Q6_K.gguf) | Q6_K | 10.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L3.1-HermesBreakGradientXL-12B-GGUF/resolve/main/L3.1-HermesBreakGradientXL-12B.Q8_0.gguf) | Q8_0 | 13.0 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
QuantFactory/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF
QuantFactory
2024-09-09T06:58:29Z
66
2
null
[ "gguf", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-09T05:44:56Z
--- license: apache-2.0 --- ![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ) # QuantFactory/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF This is quantized version of [ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1) created using llama.cpp # Original Model Card # ArliAI-RPMax-12B-v1.1 ===================================== ## Overview This repository is based on the Mistral-Nemo-Base-2407 model and is governed by the Apache 2.0 License agreement: https://huggingface.co/mistralai/Mistral-Nemo-Base-2407 ## Model Description ArliAI-RPMax-12B-v1.1 is trained on a diverse set of curated RP datasets with a focus on variety and deduplication. This model is designed to be highly creative and non-repetitive, with a unique approach to training that minimizes repetition. You can access the model at https://arliai.com and ask questions at https://www.reddit.com/r/ArliAI/ ### Training Details * **Sequence Length**: 8192 * **Training Duration**: Approximately 2 days on 2x3090Ti * **Epochs**: 1 epoch training for minimized repetition sickness * **QLORA**: 64-rank 128-alpha, resulting in ~2% trainable weights * **Learning Rate**: 0.00001 * **Gradient accumulation**: Very low 32 for better learning. ## Quantization The model is available in quantized formats: * **FP16**: https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1 * **GPTQ_Q4**: https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GPTQ_Q4 * **GPTQ_Q8**: https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GPTQ_Q8 * **GGUF**: https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF ## Suggested Prompt Format Mistral Instruct Prompt Format
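Since the card names the Mistral Instruct prompt format, a tokenizer-driven sketch is safer than hand-writing the template; loading the tokenizer from the original FP16 repo (and it shipping a chat template) is an assumption, not something this card states.

```python
# Minimal sketch: build a Mistral-Instruct-formatted prompt via the chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1")
messages = [{"role": "user", "content": "Describe your character in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # feed this string to your GGUF runtime of choice
```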
weihao1115/ucmt_sam_on_mslidar
weihao1115
2024-09-09T06:51:20Z
6
0
null
[ "safetensors", "vit_b", "model_hub_mixin", "pytorch_model_hub_mixin", "mask-generation", "arxiv:2408.09085", "license:mit", "region:us" ]
mask-generation
2024-09-01T14:35:12Z
--- license: mit pipeline_tag: mask-generation tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Github: https://github.com/weihao1115/mm-sam - Paper: https://huggingface.co/papers/2408.09085
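A loading sketch of the mixin pattern this checkpoint relies on; the concrete model class lives in the linked mm-sam repository, and `YourModel` below is a hypothetical placeholder, not the real class name.

```python
# Minimal sketch of the PyTorchModelHubMixin pattern: any class that inherits
# the mixin gains from_pretrained/save_pretrained/push_to_hub for free.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class YourModel(nn.Module, PyTorchModelHubMixin):  # placeholder class
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(4, 4)

# With the real class from https://github.com/weihao1115/mm-sam, loading would look like:
# model = RealModelClass.from_pretrained("weihao1115/ucmt_sam_on_mslidar")
```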
weihao1115/ucmt_sam_on_hsi
weihao1115
2024-09-09T06:48:14Z
6
0
null
[ "safetensors", "vit_b", "model_hub_mixin", "pytorch_model_hub_mixin", "mask-generation", "arxiv:2408.09085", "license:mit", "region:us" ]
mask-generation
2024-09-01T14:35:29Z
--- license: mit pipeline_tag: mask-generation tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Github: https://github.com/weihao1115/mm-sam - Paper: https://huggingface.co/papers/2408.09085
jvelja/vllm-gemma2b-deterministic_7
jvelja
2024-09-09T06:42:30Z
62
0
transformers
[ "transformers", "pytorch", "safetensors", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "endpoints_compatible", "region:us" ]
reinforcement-learning
2024-09-09T06:42:27Z
--- license: apache-2.0 library_name: transformers tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="jvelja/vllm-gemma2b-deterministic_7") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("jvelja/vllm-gemma2b-deterministic_7") model = AutoModelForCausalLMWithValueHead.from_pretrained("jvelja/vllm-gemma2b-deterministic_7") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
simonycl/llama-3.1-8b-instruct-armorm-iter0
simonycl
2024-09-09T06:37:36Z
10
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "alignment-handbook", "generated_from_trainer", "conversational", "dataset:simonycl/Meta-Llama-3.1-8B-Instruct_ultrafeedback_iter_0_rm_annotate", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T06:35:26Z
--- library_name: transformers license: llama3.1 base_model: meta-llama/Meta-Llama-3.1-8B-Instruct tags: - alignment-handbook - generated_from_trainer datasets: - simonycl/Meta-Llama-3.1-8B-Instruct_ultrafeedback_iter_0_rm_annotate model-index: - name: llama-3.1-8b-instruct-armorm-iter0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama-3.1-8b-instruct-armorm-iter0 This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the simonycl/Meta-Llama-3.1-8B-Instruct_ultrafeedback_iter_0_rm_annotate dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
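A generation sketch follows; the dtype and device settings are assumptions, not part of this card.

```python
# Minimal sketch: chat with the fine-tuned checkpoint via the pipeline API.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="simonycl/llama-3.1-8b-instruct-armorm-iter0",
    torch_dtype=torch.bfloat16,  # assumed; pick what your hardware supports
    device_map="auto",
)
messages = [{"role": "user", "content": "Explain what RM-annotated preference data is."}]
print(generator(messages, max_new_tokens=128)[0]["generated_text"])
```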
RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf
RichardErkhov
2024-09-09T06:34:14Z
12
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-09-09T01:43:15Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llava-v1.5-7b_vicuna - GGUF - Model creator: https://huggingface.co/nnethercott/ - Original model: https://huggingface.co/nnethercott/llava-v1.5-7b_vicuna/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llava-v1.5-7b_vicuna.Q2_K.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.Q2_K.gguf) | Q2_K | 2.36GB | | [llava-v1.5-7b_vicuna.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.IQ3_XS.gguf) | IQ3_XS | 2.6GB | | [llava-v1.5-7b_vicuna.IQ3_S.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.IQ3_S.gguf) | IQ3_S | 2.75GB | | [llava-v1.5-7b_vicuna.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.Q3_K_S.gguf) | Q3_K_S | 2.75GB | | [llava-v1.5-7b_vicuna.IQ3_M.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.IQ3_M.gguf) | IQ3_M | 2.9GB | | [llava-v1.5-7b_vicuna.Q3_K.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.Q3_K.gguf) | Q3_K | 3.07GB | | [llava-v1.5-7b_vicuna.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.Q3_K_M.gguf) | Q3_K_M | 3.07GB | | [llava-v1.5-7b_vicuna.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.Q3_K_L.gguf) | Q3_K_L | 3.35GB | | [llava-v1.5-7b_vicuna.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.IQ4_XS.gguf) | IQ4_XS | 3.4GB | | [llava-v1.5-7b_vicuna.Q4_0.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.Q4_0.gguf) | Q4_0 | 3.56GB | | [llava-v1.5-7b_vicuna.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.IQ4_NL.gguf) | IQ4_NL | 3.58GB | | [llava-v1.5-7b_vicuna.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.Q4_K_S.gguf) | Q4_K_S | 3.59GB | | [llava-v1.5-7b_vicuna.Q4_K.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.Q4_K.gguf) | Q4_K | 3.8GB | | [llava-v1.5-7b_vicuna.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.Q4_K_M.gguf) | Q4_K_M | 3.8GB | | [llava-v1.5-7b_vicuna.Q4_1.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.Q4_1.gguf) | Q4_1 | 3.95GB | | [llava-v1.5-7b_vicuna.Q5_0.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.Q5_0.gguf) | Q5_0 | 4.33GB | | [llava-v1.5-7b_vicuna.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.Q5_K_S.gguf) | Q5_K_S | 4.33GB | | [llava-v1.5-7b_vicuna.Q5_K.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.Q5_K.gguf) | Q5_K | 4.45GB | | 
[llava-v1.5-7b_vicuna.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.Q5_K_M.gguf) | Q5_K_M | 4.45GB | | [llava-v1.5-7b_vicuna.Q5_1.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.Q5_1.gguf) | Q5_1 | 4.72GB | | [llava-v1.5-7b_vicuna.Q6_K.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.Q6_K.gguf) | Q6_K | 5.15GB | | [llava-v1.5-7b_vicuna.Q8_0.gguf](https://huggingface.co/RichardErkhov/nnethercott_-_llava-v1.5-7b_vicuna-gguf/blob/main/llava-v1.5-7b_vicuna.Q8_0.gguf) | Q8_0 | 6.67GB | Original model description: --- license: llama2 model-index: - name: llava-v1.5-7b_vicuna results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 52.65 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nnethercott/llava-v1.5-7b_vicuna name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 76.09 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nnethercott/llava-v1.5-7b_vicuna name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 51.68 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nnethercott/llava-v1.5-7b_vicuna name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 45.86 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nnethercott/llava-v1.5-7b_vicuna name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 72.06 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nnethercott/llava-v1.5-7b_vicuna name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 15.31 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nnethercott/llava-v1.5-7b_vicuna name: Open LLM Leaderboard --- ## Model details **Motivation** This model contains the fine-tuned weights from `liuhaotian/llava-v1.5-7b` so LLM benchmarking can be done. **Model type:** LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture. ## License Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved. ## Training dataset - 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP. - 158K GPT-generated multimodal instruction-following data.
- 450K academic-task-oriented VQA data mixture. - 40K ShareGPT data. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nnethercott__llava-v1.5-7b_vicuna) | Metric |Value| |---------------------------------|----:| |Avg. |52.28| |AI2 Reasoning Challenge (25-Shot)|52.65| |HellaSwag (10-Shot) |76.09| |MMLU (5-Shot) |51.68| |TruthfulQA (0-shot) |45.86| |Winogrande (5-shot) |72.06| |GSM8k (5-shot) |15.31|
FriendliAI/Meta-Llama-3.1-70B-Instruct-int8
FriendliAI
2024-09-09T06:33:48Z
13
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "conversational", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "base_model:meta-llama/Llama-3.1-70B", "base_model:quantized:meta-llama/Llama-3.1-70B", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "int8", "region:us" ]
text-generation
2024-09-09T00:53:19Z
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers base_model: meta-llama/Meta-Llama-3.1-70B license: llama3.1 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 extra_gated_prompt: "### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT\nLlama 3.1 Version\ \ Release Date: July 23, 2024\n\"Agreement\" means the terms and conditions for\ \ use, reproduction, distribution and modification of the Llama Materials set forth\ \ herein.\n\"Documentation\" means the specifications, manuals and documentation\ \ accompanying Llama 3.1 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \"Licensee\" or \"you\" means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf), of\ \ the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\"Llama 3.1\"\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means,\ \ collectively, Meta’s proprietary Llama 3.1 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you\ \ are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy,\ \ create derivative works of, and make modifications to the Llama Materials.\nb.\ \ Redistribution and Use.\ni. If you distribute or make available the Llama Materials\ \ (or any derivative works thereof), or a product or service (including another\ \ AI model) that contains any of them, you shall (A) provide a copy of this Agreement\ \ with any such Llama Materials; and (B) prominently display “Built with Llama”\ \ on a related website, user interface, blogpost, about page, or product documentation.\ \ If you use the Llama Materials or any outputs or results of the Llama Materials\ \ to create, train, fine tune, or otherwise improve an AI model, which is distributed\ \ or made available, you shall also include “Llama” at the beginning of any such\ \ AI model name.\nii. If you receive Llama Materials, or any derivative works thereof,\ \ from a Licensee as part of an integrated end user product, then Section 2 of\ \ this Agreement will not apply to you.\niii. You must retain in all copies of the\ \ Llama Materials that you distribute the following attribution notice within a\ \ “Notice” text file distributed as a part of such copies: “Llama 3.1 is licensed\ \ under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights\ \ Reserved.”\niv. 
Your use of the Llama Materials must comply with applicable laws\ \ and regulations (including trade compliance laws and regulations) and adhere to\ \ the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3_1/use-policy),\ \ which is hereby incorporated by reference into this Agreement.\n2. Additional\ \ Commercial Terms. If, on the Llama 3.1 version release date, the monthly active\ \ users of the products or services made available by or for Licensee, or Licensee’s\ \ affiliates, is greater than 700 million monthly active users in the preceding\ \ calendar month, you must request a license from Meta, which Meta may grant to\ \ you in its sole discretion, and you are not authorized to exercise any of the\ \ rights under this Agreement unless or until Meta otherwise expressly grants you\ \ such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE\ \ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS”\ \ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY\ \ KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\ \ OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.\ \ YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\ \ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA\ \ MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT\ \ WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN\ \ CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS\ \ AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,\ \ EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED\ \ OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No\ \ trademark licenses are granted under this Agreement, and in connection with the\ \ Llama Materials, neither Meta nor Licensee may use any name or mark owned by or\ \ associated with the other or any of its affiliates, except as required for reasonable\ \ and customary use in describing and redistributing the Llama Materials or as set\ \ forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the\ \ “Mark”) solely as required to comply with the last sentence of Section 1.b.i.\ \ You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/\ \ ). All goodwill arising out of your use of the Mark will inure to the benefit\ \ of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made\ \ by or for Meta, with respect to any derivative works and modifications of the\ \ Llama Materials that are made by you, as between you and Meta, you are and will\ \ be the owner of such derivative works and modifications.\nc. If you institute\ \ litigation or other proceedings against Meta or any entity (including a cross-claim\ \ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs\ \ or results, or any portion of any of the foregoing, constitutes infringement of\ \ intellectual property or other rights owned or licensable by you, then any licenses\ \ granted to you under this Agreement shall terminate as of the date such litigation\ \ or claim is filed or instituted. 
You will indemnify and hold harmless Meta from\ \ and against any claim by any third party arising out of or related to your use\ \ or distribution of the Llama Materials.\n6. Term and Termination. The term of\ \ this Agreement will commence upon your acceptance of this Agreement or access\ \ to the Llama Materials and will continue in full force and effect until terminated\ \ in accordance with the terms and conditions herein. Meta may terminate this Agreement\ \ if you are in breach of any term or condition of this Agreement. Upon termination\ \ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\ \ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\ \ and Jurisdiction. This Agreement will be governed and construed under the laws\ \ of the State of California without regard to choice of law principles, and the\ \ UN Convention on Contracts for the International Sale of Goods does not apply\ \ to this Agreement. The courts of California shall have exclusive jurisdiction\ \ of any dispute arising out of this Agreement.\n### Llama 3.1 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.1. If you access or use Llama 3.1, you agree to this Acceptable Use Policy\ \ (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.1 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.1 to:\n 1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 3. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 4. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 5.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 6. Collect, process, disclose, generate, or infer health, demographic,\ \ or other sensitive personal or private information about individuals without rights\ \ and consents required by applicable laws\n 7. Engage in or facilitate any action\ \ or generate any content that infringes, misappropriates, or otherwise violates\ \ any third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 8. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n2. Engage in, promote, incite,\ \ facilitate, or assist in the planning or development of activities that present\ \ a risk of death or bodily harm to individuals, including use of Llama 3.1 related\ \ to the following:\n 1. Military, warfare, nuclear industries or applications,\ \ espionage, use for materials or activities that are subject to the International\ \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\ \ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\ \ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\ \ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\ \ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\ \ content intended to incite or promote violence, abuse, or any infliction of bodily\ \ harm to an individual\n3. Intentionally deceive or mislead others, including use\ \ of Llama 3.1 related to the following:\n 1. Generating, promoting, or furthering\ \ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\ \ or furthering defamatory content, including the creation of defamatory statements,\ \ images, or other content\n 3. Generating, promoting, or further distributing\ \ spam\n 4. Impersonating another individual without consent, authorization,\ \ or legal right\n 5. Representing that the use of Llama 3.1 or outputs are human-generated\n\ \ 6. Generating or facilitating false online engagement, including fake reviews\ \ and other means of fake online engagement\n4. Fail to appropriately disclose to\ \ end users any known dangers of your AI system\nPlease report any violation of\ \ this Policy, software “bug,” or other problems that could lead to a violation\ \ of this Policy through one of the following means:\n * Reporting issues with\ \ the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)\n\ \ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\ \ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\ \ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). 
extra_gated_button_content: Submit
---

<!-- header start -->
<p align="center">
  <img src="https://i.imgur.com/mNM6Cai.png" width="100%" alt="Friendli Logo">
</p>
<!-- header end -->

# Llama 3.1 70B Instruct - INT8

- Model creator: [Meta Llama 3.1](https://huggingface.co/meta-llama)
- Original model: [Llama 3.1 70B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct)

## Description

This repo contains the Llama 3.1 70B Instruct model quantized to INT8 by FriendliAI, significantly enhancing its inference efficiency while maintaining high accuracy. Check out [FriendliAI documentation](https://docs.friendli.ai/) for more details.

## License

Refer to the license of the original model card.

## Compatibility

This model is compatible with **[Friendli Container](https://friendli.ai/products/container/)**.

## Prerequisites

- Before you begin, make sure you have signed up for [Friendli Suite](https://suite.friendli.ai/). **You can use Friendli Containers free of charge for four weeks.**
- Prepare a Personal Access Token following [this guide](#preparing-personal-access-token).
- Prepare a Friendli Container Secret following [this guide](#preparing-container-secret).

### Preparing Personal Access Token

A PAT (Personal Access Token) is the user credential for logging into our container registry.

1. Sign in to [Friendli Suite](https://suite.friendli.ai/).
2. Go to **[User Settings > Tokens](https://suite.friendli.ai/user-settings/tokens)** and click **'Create new token'**.
3. Save your created token value.

### Preparing Container Secret

A container secret is a credential to launch our Friendli Container images. You should pass the container secret as an environment variable to run the container image.

1. Sign in to [Friendli Suite](https://suite.friendli.ai/).
2. Go to **Container > Container Secrets** and click **'Create secret'**.
3. Save your created secret value.

### Pulling Friendli Container Image

1. Log in to the Docker client using the personal access token created as outlined in [this guide](#preparing-personal-access-token).

   ```sh
   export FRIENDLI_PAT="YOUR PAT"
   export YOUR_EMAIL="YOUR EMAIL"  # the email address of your Friendli Suite account
   docker login registry.friendli.ai -u $YOUR_EMAIL -p $FRIENDLI_PAT
   ```

2. Pull the image:

   ```sh
   docker pull registry.friendli.ai/trial
   ```

## Running Friendli Container

Once you have pulled the Friendli Container image, you can launch it to create a serving endpoint.

```sh
docker run \
  --gpus '"device=0,1"' \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e FRIENDLI_CONTAINER_SECRET="YOUR CONTAINER SECRET" \
  registry.friendli.ai/trial \
  --web-server-port 8000 \
  --hf-model-name FriendliAI/Meta-Llama-3.1-70B-Instruct-int8 \
  --num-devices 2  # Use tensor parallelism degree 2
```

---

## Model Information

The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.

**Model developer**: Meta

**Model Architecture:** Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
  <tr>
    <td></td>
    <td><strong>Training Data</strong></td>
    <td><strong>Params</strong></td>
    <td><strong>Input modalities</strong></td>
    <td><strong>Output modalities</strong></td>
    <td><strong>Context length</strong></td>
    <td><strong>GQA</strong></td>
    <td><strong>Token count</strong></td>
    <td><strong>Knowledge cutoff</strong></td>
  </tr>
  <tr>
    <td rowspan="3">Llama 3.1 (text only)</td>
    <td rowspan="3">A new mix of publicly available online data.</td>
    <td>8B</td>
    <td>Multilingual Text</td>
    <td>Multilingual Text and code</td>
    <td>128k</td>
    <td>Yes</td>
    <td rowspan="3">15T+</td>
    <td rowspan="3">December 2023</td>
  </tr>
  <tr>
    <td>70B</td>
    <td>Multilingual Text</td>
    <td>Multilingual Text and code</td>
    <td>128k</td>
    <td>Yes</td>
  </tr>
  <tr>
    <td>405B</td>
    <td>Multilingual Text</td>
    <td>Multilingual Text and code</td>
    <td>128k</td>
    <td>Yes</td>
  </tr>
</table>

**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

**Llama 3.1 family of models**. Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.

**Model Release Date:** July 23, 2024.

**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License:** A custom commercial license, the Llama 3.1 Community License, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)

**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).

## Intended Use

**Intended Use Cases** Llama 3.1 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.1 model collection also supports the ability to leverage the outputs of its models to improve other models, including synthetic data generation and distillation. The Llama 3.1 Community License allows for these use cases.

**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.1 Community License. Use in languages beyond those explicitly referenced as supported in this model card**.

**<span style="text-decoration:underline;">Note</span>: Llama 3.1 has been trained on a broader collection of languages than the 8 supported languages. Developers may fine-tune Llama 3.1 models for languages beyond the 8 supported languages provided they comply with the Llama 3.1 Community License and the Acceptable Use Policy, and in such cases are responsible for ensuring that any use of Llama 3.1 in additional languages is done in a safe and responsible manner.

## How to use

This repository contains two versions of Meta-Llama-3.1-70B-Instruct, for use with transformers and with the original `llama` codebase.
### Use with transformers

Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`.

See the snippet below for usage with Transformers:

```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3.1-70B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```

### Tool use with transformers

LLaMA-3.1 supports multiple tool use formats. You can see a full guide to prompt formatting [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/).

Tool use is also supported through [chat templates](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling) in Transformers. Here is a quick example showing a single simple tool:

```python
# First, load a tokenizer and define a tool
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-70B-Instruct")

def get_current_temperature(location: str) -> float:
    """
    Get the current temperature at a location.

    Args:
        location: The location to get the temperature for, in the format "City, Country"
    Returns:
        The current temperature at the specified location in the specified units, as a float.
    """
    return 22.  # A real function should probably actually get the temperature!

# Next, create a chat and apply the chat template
messages = [
  {"role": "system", "content": "You are a bot that responds to weather queries."},
  {"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
]

inputs = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True)
```

You can then generate text from this input as normal. If the model generates a tool call, you should add it to the chat like so:

```python
tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})
```

and then call the tool and append the result, with the `tool` role, like so:

```python
messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})
```

After that, you can `generate()` again to let the model use the tool result in the chat. Note that this was a very brief introduction to tool calling - for more information, see the [LLaMA prompt format docs](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/) and the Transformers [tool use documentation](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling).
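For completeness, here is a minimal end-to-end sketch of the round trip described above: render the chat with the tool schema, generate the assistant turn, then append the (hard-coded) tool result and generate again. The model loading choices, `max_new_tokens` values, and the helper function below are illustrative assumptions, not part of the official recipe:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-70B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def get_current_temperature(location: str) -> float:
    """Get the current temperature at a location ("City, Country")."""
    return 22.0  # Hard-coded for illustration

messages = [
    {"role": "system", "content": "You are a bot that responds to weather queries."},
    {"role": "user", "content": "Hey, what's the temperature in Paris right now?"},
]

def generate_turn(messages):
    # Render the conversation (including the tool schema) and generate one assistant turn
    inputs = tokenizer.apply_chat_template(
        messages,
        tools=[get_current_temperature],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=128)
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

print(generate_turn(messages))  # The model should emit a tool call here

# Append the tool call and its result, then let the model answer in natural language
tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})
messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})
print(generate_turn(messages))
```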
### Use with `bitsandbytes`

The model checkpoints can be used in `8-bit` and `4-bit` for further memory optimisations using `bitsandbytes` and `transformers`. See the snippet below for usage:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Meta-Llama-3.1-70B-Instruct"
quantization_config = BitsAndBytesConfig(load_in_8bit=True)

quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config)

tokenizer = AutoTokenizer.from_pretrained(model_id)
input_text = "What are we having for dinner?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

output = quantized_model.generate(**input_ids, max_new_tokens=10)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

To load in 4-bit, simply pass `load_in_4bit=True` to `BitsAndBytesConfig` instead.

### Use with `llama`

Please follow the instructions in the [repository](https://github.com/meta-llama/llama).

To download Original checkpoints, see the example command below leveraging `huggingface-cli`:

```
huggingface-cli download meta-llama/Meta-Llama-3.1-70B-Instruct --include "original/*" --local-dir Meta-Llama-3.1-70B-Instruct
```

## Hardware and Software

**Training Factors** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.

**Training utilized a cumulative** 39.3M GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.

**Training Greenhouse Gas Emissions** Estimated total location-based greenhouse gas emissions were **11,390** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.

<table>
  <tr>
    <td></td>
    <td><strong>Training Time (GPU hours)</strong></td>
    <td><strong>Training Power Consumption (W)</strong></td>
    <td><strong>Training Location-Based Greenhouse Gas Emissions</strong> <p> <strong>(tons CO2eq)</strong></td>
    <td><strong>Training Market-Based Greenhouse Gas Emissions</strong> <p> <strong>(tons CO2eq)</strong></td>
  </tr>
  <tr>
    <td>Llama 3.1 8B</td>
    <td>1.46M</td>
    <td>700</td>
    <td>420</td>
    <td>0</td>
  </tr>
  <tr>
    <td>Llama 3.1 70B</td>
    <td>7.0M</td>
    <td>700</td>
    <td>2,040</td>
    <td>0</td>
  </tr>
  <tr>
    <td>Llama 3.1 405B</td>
    <td>30.84M</td>
    <td>700</td>
    <td>8,930</td>
    <td>0</td>
  </tr>
  <tr>
    <td>Total</td>
    <td>39.3M</td>
    <td></td>
    <td>11,390</td>
    <td>0</td>
  </tr>
</table>

The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.

## Training Data

**Overview:** Llama 3.1 was pretrained on ~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples.

**Data Freshness:** The pretraining data has a cutoff of December 2023.
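As a quick sanity check of the totals in the hardware table above, the per-model GPU hours and location-based emissions do sum to the stated cumulative figures; a minimal verification, with values transcribed from the table:

```python
# Verify the totals in the hardware table above (values transcribed from the table)
gpu_hours_millions = {"Llama 3.1 8B": 1.46, "Llama 3.1 70B": 7.0, "Llama 3.1 405B": 30.84}
emissions_tons = {"Llama 3.1 8B": 420, "Llama 3.1 70B": 2040, "Llama 3.1 405B": 8930}

assert round(sum(gpu_hours_millions.values()), 2) == 39.3  # 39.3M GPU hours
assert sum(emissions_tons.values()) == 11390               # 11,390 tons CO2eq
print("Totals check out.")
```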
## Benchmark scores In this section, we report the results for Llama 3.1 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong># Shots</strong> </td> <td><strong>Metric</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 3.1 8B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 3.1 70B</strong> </td> <td><strong>Llama 3.1 405B</strong> </td> </tr> <tr> <td rowspan="7" >General </td> <td>MMLU </td> <td>5 </td> <td>macro_avg/acc_char </td> <td>66.7 </td> <td>66.7 </td> <td>79.5 </td> <td>79.3 </td> <td>85.2 </td> </tr> <tr> <td>MMLU-Pro (CoT) </td> <td>5 </td> <td>macro_avg/acc_char </td> <td>36.2 </td> <td>37.1 </td> <td>55.0 </td> <td>53.8 </td> <td>61.6 </td> </tr> <tr> <td>AGIEval English </td> <td>3-5 </td> <td>average/acc_char </td> <td>47.1 </td> <td>47.8 </td> <td>63.0 </td> <td>64.6 </td> <td>71.6 </td> </tr> <tr> <td>CommonSenseQA </td> <td>7 </td> <td>acc_char </td> <td>72.6 </td> <td>75.0 </td> <td>83.8 </td> <td>84.1 </td> <td>85.8 </td> </tr> <tr> <td>Winogrande </td> <td>5 </td> <td>acc_char </td> <td>- </td> <td>60.5 </td> <td>- </td> <td>83.3 </td> <td>86.7 </td> </tr> <tr> <td>BIG-Bench Hard (CoT) </td> <td>3 </td> <td>average/em </td> <td>61.1 </td> <td>64.2 </td> <td>81.3 </td> <td>81.6 </td> <td>85.9 </td> </tr> <tr> <td>ARC-Challenge </td> <td>25 </td> <td>acc_char </td> <td>79.4 </td> <td>79.7 </td> <td>93.1 </td> <td>92.9 </td> <td>96.1 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki </td> <td>5 </td> <td>em </td> <td>78.5 </td> <td>77.6 </td> <td>89.7 </td> <td>89.8 </td> <td>91.8 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD </td> <td>1 </td> <td>em </td> <td>76.4 </td> <td>77.0 </td> <td>85.6 </td> <td>81.8 </td> <td>89.3 </td> </tr> <tr> <td>QuAC (F1) </td> <td>1 </td> <td>f1 </td> <td>44.4 </td> <td>44.9 </td> <td>51.1 </td> <td>51.1 </td> <td>53.6 </td> </tr> <tr> <td>BoolQ </td> <td>0 </td> <td>acc_char </td> <td>75.7 </td> <td>75.0 </td> <td>79.0 </td> <td>79.4 </td> <td>80.0 </td> </tr> <tr> <td>DROP (F1) </td> <td>3 </td> <td>f1 </td> <td>58.4 </td> <td>59.5 </td> <td>79.7 </td> <td>79.6 </td> <td>84.8 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong># Shots</strong> </td> <td><strong>Metric</strong> </td> <td><strong>Llama 3 8B Instruct</strong> </td> <td><strong>Llama 3.1 8B Instruct</strong> </td> <td><strong>Llama 3 70B Instruct</strong> </td> <td><strong>Llama 3.1 70B Instruct</strong> </td> <td><strong>Llama 3.1 405B Instruct</strong> </td> </tr> <tr> <td rowspan="4" >General </td> <td>MMLU </td> <td>5 </td> <td>macro_avg/acc </td> <td>68.5 </td> <td>69.4 </td> <td>82.0 </td> <td>83.6 </td> <td>87.3 </td> </tr> <tr> <td>MMLU (CoT) </td> <td>0 </td> <td>macro_avg/acc </td> <td>65.3 </td> <td>73.0 </td> <td>80.9 </td> <td>86.0 </td> <td>88.6 </td> </tr> <tr> <td>MMLU-Pro (CoT) </td> <td>5 </td> <td>micro_avg/acc_char </td> <td>45.5 </td> <td>48.3 </td> <td>63.4 </td> <td>66.4 </td> <td>73.3 </td> </tr> <tr> <td>IFEval </td> <td> </td> <td> </td> <td>76.8 </td> <td>80.4 </td> <td>82.9 </td> <td>87.5 </td> <td>88.6 </td> </tr> <tr> <td rowspan="2" >Reasoning </td> <td>ARC-C </td> <td>0 </td> <td>acc </td> <td>82.4 </td> <td>83.4 </td> <td>94.4 </td> <td>94.8 </td> <td>96.9 </td> </tr> <tr> <td>GPQA 
</td>
    <td>0</td>
    <td>em</td>
    <td>34.6</td>
    <td>30.4</td>
    <td>39.5</td>
    <td>41.7</td>
    <td>50.7</td>
  </tr>
  <tr>
    <td rowspan="4">Code</td>
    <td>HumanEval</td>
    <td>0</td>
    <td>pass@1</td>
    <td>60.4</td>
    <td>72.6</td>
    <td>81.7</td>
    <td>80.5</td>
    <td>89.0</td>
  </tr>
  <tr>
    <td>MBPP ++ base version</td>
    <td>0</td>
    <td>pass@1</td>
    <td>70.6</td>
    <td>72.8</td>
    <td>82.5</td>
    <td>86.0</td>
    <td>88.6</td>
  </tr>
  <tr>
    <td>Multipl-E HumanEval</td>
    <td>0</td>
    <td>pass@1</td>
    <td>-</td>
    <td>50.8</td>
    <td>-</td>
    <td>65.5</td>
    <td>75.2</td>
  </tr>
  <tr>
    <td>Multipl-E MBPP</td>
    <td>0</td>
    <td>pass@1</td>
    <td>-</td>
    <td>52.4</td>
    <td>-</td>
    <td>62.0</td>
    <td>65.7</td>
  </tr>
  <tr>
    <td rowspan="2">Math</td>
    <td>GSM-8K (CoT)</td>
    <td>8</td>
    <td>em_maj1@1</td>
    <td>80.6</td>
    <td>84.5</td>
    <td>93.0</td>
    <td>95.1</td>
    <td>96.8</td>
  </tr>
  <tr>
    <td>MATH (CoT)</td>
    <td>0</td>
    <td>final_em</td>
    <td>29.1</td>
    <td>51.9</td>
    <td>51.0</td>
    <td>68.0</td>
    <td>73.8</td>
  </tr>
  <tr>
    <td rowspan="4">Tool Use</td>
    <td>API-Bank</td>
    <td>0</td>
    <td>acc</td>
    <td>48.3</td>
    <td>82.6</td>
    <td>85.1</td>
    <td>90.0</td>
    <td>92.0</td>
  </tr>
  <tr>
    <td>BFCL</td>
    <td>0</td>
    <td>acc</td>
    <td>60.3</td>
    <td>76.1</td>
    <td>83.0</td>
    <td>84.8</td>
    <td>88.5</td>
  </tr>
  <tr>
    <td>Gorilla Benchmark API Bench</td>
    <td>0</td>
    <td>acc</td>
    <td>1.7</td>
    <td>8.2</td>
    <td>14.7</td>
    <td>29.7</td>
    <td>35.3</td>
  </tr>
  <tr>
    <td>Nexus (0-shot)</td>
    <td>0</td>
    <td>macro_avg/acc</td>
    <td>18.1</td>
    <td>38.5</td>
    <td>47.8</td>
    <td>56.7</td>
    <td>58.7</td>
  </tr>
  <tr>
    <td>Multilingual</td>
    <td>Multilingual MGSM (CoT)</td>
    <td>0</td>
    <td>em</td>
    <td>-</td>
    <td>68.9</td>
    <td>-</td>
    <td>86.9</td>
    <td>91.6</td>
  </tr>
</table>

#### Multilingual benchmarks

<table>
  <tr>
    <td><strong>Category</strong></td>
    <td><strong>Benchmark</strong></td>
    <td><strong>Language</strong></td>
    <td><strong>Llama 3.1 8B</strong></td>
    <td><strong>Llama 3.1 70B</strong></td>
    <td><strong>Llama 3.1 405B</strong></td>
  </tr>
  <tr>
    <td rowspan="7"><strong>General</strong></td>
    <td rowspan="7"><strong>MMLU (5-shot, macro_avg/acc)</strong></td>
    <td>Portuguese</td>
    <td>62.12</td>
    <td>80.13</td>
    <td>84.95</td>
  </tr>
  <tr>
    <td>Spanish</td>
    <td>62.45</td>
    <td>80.05</td>
    <td>85.08</td>
  </tr>
  <tr>
    <td>Italian</td>
    <td>61.63</td>
    <td>80.4</td>
    <td>85.04</td>
  </tr>
  <tr>
    <td>German</td>
    <td>60.59</td>
    <td>79.27</td>
    <td>84.36</td>
  </tr>
  <tr>
    <td>French</td>
    <td>62.34</td>
    <td>79.82</td>
    <td>84.66</td>
  </tr>
  <tr>
    <td>Hindi</td>
    <td>50.88</td>
    <td>74.52</td>
    <td>80.31</td>
  </tr>
  <tr>
    <td>Thai</td>
    <td>50.32</td>
    <td>72.95</td>
    <td>78.21</td>
  </tr>
</table>

## Responsibility & Safety

As part of our responsible release approach, we followed a three-pronged strategy to manage trust & safety risks:

* Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama.
* Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
* Provide protections for the community to help prevent the misuse of our models.

### Responsible deployment

Llama is a foundational technology designed to be used in a variety of use cases; examples of how Meta's Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/).
Our approach is to build the most helpful models, enabling the world to benefit from the technology's power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver's seat to tailor safety for their use case, defining their own policy and deploying the models with the necessary safeguards in their Llama systems. Llama 3.1 was developed following the best practices outlined in our Responsible Use Guide; you can refer to the [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to learn more.

#### Llama 3.1 instruct

Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications, reducing the workload required to deploy safe AI systems. For more details on the safety mitigations implemented, please read the Llama 3 paper.

**Fine-tuning data**

We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We've developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.

**Refusals and Tone**

Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.

#### Llama 3.1 systems

**Large language models, including Llama 3.1, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required.** Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment, as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools.

As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard 3, Prompt Guard and Code Shield. All our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default, so developers can benefit from system-level safety out of the box.

#### New capabilities

Note that this release introduces new capabilities, including a longer context window, multilingual inputs and outputs, and possible integrations by developers with third-party tools. Building with these new capabilities requires specific considerations in addition to the best practices that generally apply across all Generative AI use cases.

**Tool-use**: Just like in standard software development, developers are responsible for the integration of the LLM with the tools and services of their choice. They should define a clear policy for their use case and assess the integrity of the third-party services they use, so they are aware of the safety and security limitations when using this capability. Refer to the Responsible Use Guide for best practices on the safe deployment of third-party safeguards.
**Multilinguality**: Llama 3.1 supports 7 languages in addition to English: French, German, Hindi, Italian, Portuguese, Spanish, and Thai. Llama may be able to output text in languages other than those that meet performance thresholds for safety and helpfulness. We strongly discourage developers from using this model to converse in non-supported languages without implementing fine-tuning and system controls in alignment with their policies and the best practices shared in the Responsible Use Guide.

### Evaluations

We evaluated Llama models for common use cases as well as specific capabilities. Common use case evaluations measure safety risks of systems for the most commonly built applications, including chatbots, coding assistants, and tool calls. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building dedicated evaluation datasets for your use case. Prompt Guard and Code Shield are also available if relevant to the application.

Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which we crafted dedicated benchmarks, including long context, multilingual, tool calls, coding, and memorization.

**Red teaming**

For both scenarios, we conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting, and we used the learnings to improve our benchmarks and safety tuning datasets.

We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity, in addition to multilingual content specialists with backgrounds in integrity issues in specific geographic markets.

### Critical and other risks

We specifically focused our efforts on mitigating the following critical risk areas:

**1. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness**

To assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons.

**2. Child Safety**

Child Safety risk assessments were conducted using a team of experts to assess the model's capability to produce outputs that could result in Child Safety risks, and to inform on any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences.

**3.
Cyber attack enablement**

Our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.

Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention.

Our study of Llama-3.1-405B's social engineering uplift for cyber attackers was conducted to assess the effectiveness of AI models in aiding cyber threat actors in spear phishing campaigns. Please read our Llama 3.1 cybersecurity whitepaper to learn more.

### Community

Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open-sourced for the community to use and are widely distributed across ecosystem partners, including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).

We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta's Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).

Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

The core values of Llama 3.1 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.1 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3.1 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.1's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts.
Therefore, before deploying any applications of Llama 3.1 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
FriendliAI/Meta-Llama-3.1-8B-Instruct-int8
FriendliAI
2024-09-09T06:29:02Z
11
1
null
[ "safetensors", "llama", "facebook", "meta", "pytorch", "llama-3", "text-generation", "conversational", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "base_model:meta-llama/Llama-3.1-8B", "base_model:quantized:meta-llama/Llama-3.1-8B", "license:llama3.1", "8-bit", "int8", "region:us" ]
text-generation
2024-09-09T00:52:28Z
--- language: - en - de - fr - it - pt - hi - es - th license: llama3.1 base_model: meta-llama/Meta-Llama-3.1-8B pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 extra_gated_prompt: "### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT\nLlama 3.1 Version\ \ Release Date: July 23, 2024\n\"Agreement\" means the terms and conditions for\ \ use, reproduction, distribution and modification of the Llama Materials set forth\ \ herein.\n\"Documentation\" means the specifications, manuals and documentation\ \ accompanying Llama 3.1 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \"Licensee\" or \"you\" means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf), of\ \ the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\"Llama 3.1\"\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means,\ \ collectively, Meta’s proprietary Llama 3.1 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you\ \ are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy,\ \ create derivative works of, and make modifications to the Llama Materials.\nb.\ \ Redistribution and Use.\ni. If you distribute or make available the Llama Materials\ \ (or any derivative works thereof), or a product or service (including another\ \ AI model) that contains any of them, you shall (A) provide a copy of this Agreement\ \ with any such Llama Materials; and (B) prominently display “Built with Llama”\ \ on a related website, user interface, blogpost, about page, or product documentation.\ \ If you use the Llama Materials or any outputs or results of the Llama Materials\ \ to create, train, fine tune, or otherwise improve an AI model, which is distributed\ \ or made available, you shall also include “Llama” at the beginning of any such\ \ AI model name.\nii. If you receive Llama Materials, or any derivative works thereof,\ \ from a Licensee as part of an integrated end user product, then Section 2 of\ \ this Agreement will not apply to you.\niii. You must retain in all copies of the\ \ Llama Materials that you distribute the following attribution notice within a\ \ “Notice” text file distributed as a part of such copies: “Llama 3.1 is licensed\ \ under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights\ \ Reserved.”\niv. 
Your use of the Llama Materials must comply with applicable laws\ \ and regulations (including trade compliance laws and regulations) and adhere to\ \ the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3_1/use-policy),\ \ which is hereby incorporated by reference into this Agreement.\n2. Additional\ \ Commercial Terms. If, on the Llama 3.1 version release date, the monthly active\ \ users of the products or services made available by or for Licensee, or Licensee’s\ \ affiliates, is greater than 700 million monthly active users in the preceding\ \ calendar month, you must request a license from Meta, which Meta may grant to\ \ you in its sole discretion, and you are not authorized to exercise any of the\ \ rights under this Agreement unless or until Meta otherwise expressly grants you\ \ such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE\ \ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS”\ \ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY\ \ KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\ \ OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.\ \ YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\ \ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA\ \ MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT\ \ WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN\ \ CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS\ \ AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,\ \ EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED\ \ OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No\ \ trademark licenses are granted under this Agreement, and in connection with the\ \ Llama Materials, neither Meta nor Licensee may use any name or mark owned by or\ \ associated with the other or any of its affiliates, except as required for reasonable\ \ and customary use in describing and redistributing the Llama Materials or as set\ \ forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the\ \ “Mark”) solely as required to comply with the last sentence of Section 1.b.i.\ \ You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/\ \ ). All goodwill arising out of your use of the Mark will inure to the benefit\ \ of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made\ \ by or for Meta, with respect to any derivative works and modifications of the\ \ Llama Materials that are made by you, as between you and Meta, you are and will\ \ be the owner of such derivative works and modifications.\nc. If you institute\ \ litigation or other proceedings against Meta or any entity (including a cross-claim\ \ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs\ \ or results, or any portion of any of the foregoing, constitutes infringement of\ \ intellectual property or other rights owned or licensable by you, then any licenses\ \ granted to you under this Agreement shall terminate as of the date such litigation\ \ or claim is filed or instituted. 
You will indemnify and hold harmless Meta from\ \ and against any claim by any third party arising out of or related to your use\ \ or distribution of the Llama Materials.\n6. Term and Termination. The term of\ \ this Agreement will commence upon your acceptance of this Agreement or access\ \ to the Llama Materials and will continue in full force and effect until terminated\ \ in accordance with the terms and conditions herein. Meta may terminate this Agreement\ \ if you are in breach of any term or condition of this Agreement. Upon termination\ \ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\ \ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\ \ and Jurisdiction. This Agreement will be governed and construed under the laws\ \ of the State of California without regard to choice of law principles, and the\ \ UN Convention on Contracts for the International Sale of Goods does not apply\ \ to this Agreement. The courts of California shall have exclusive jurisdiction\ \ of any dispute arising out of this Agreement.\n### Llama 3.1 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.1. If you access or use Llama 3.1, you agree to this Acceptable Use Policy\ \ (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.1 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.1 to:\n 1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 3. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 4. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 5.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 6. Collect, process, disclose, generate, or infer health, demographic,\ \ or other sensitive personal or private information about individuals without rights\ \ and consents required by applicable laws\n 7. Engage in or facilitate any action\ \ or generate any content that infringes, misappropriates, or otherwise violates\ \ any third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 8. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n2. Engage in, promote, incite,\ \ facilitate, or assist in the planning or development of activities that present\ \ a risk of death or bodily harm to individuals, including use of Llama 3.1 related\ \ to the following:\n 1. Military, warfare, nuclear industries or applications,\ \ espionage, use for materials or activities that are subject to the International\ \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\ \ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\ \ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\ \ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\ \ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\ \ content intended to incite or promote violence, abuse, or any infliction of bodily\ \ harm to an individual\n3. Intentionally deceive or mislead others, including use\ \ of Llama 3.1 related to the following:\n 1. Generating, promoting, or furthering\ \ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\ \ or furthering defamatory content, including the creation of defamatory statements,\ \ images, or other content\n 3. Generating, promoting, or further distributing\ \ spam\n 4. Impersonating another individual without consent, authorization,\ \ or legal right\n 5. Representing that the use of Llama 3.1 or outputs are human-generated\n\ \ 6. Generating or facilitating false online engagement, including fake reviews\ \ and other means of fake online engagement\n4. Fail to appropriately disclose to\ \ end users any known dangers of your AI system\nPlease report any violation of\ \ this Policy, software “bug,” or other problems that could lead to a violation\ \ of this Policy through one of the following means:\n * Reporting issues with\ \ the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)\n\ \ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\ \ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\ \ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). 
extra_gated_button_content: Submit
---

<!-- header start -->
<p align="center">
  <img src="https://i.imgur.com/mNM6Cai.png" width="100%" alt="Friendli Logo">
</p>
<!-- header end -->

# Llama 3.1 8B Instruct - INT8

- Model creator: [Meta Llama 3.1](https://huggingface.co/meta-llama)
- Original model: [Llama 3.1 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)

## Description

This repo contains the Llama 3.1 8B Instruct model quantized to INT8 by FriendliAI, significantly enhancing its inference efficiency while maintaining high accuracy. Check out [FriendliAI documentation](https://docs.friendli.ai/) for more details.

## License

Refer to the license of the original model card.

## Compatibility

This model is compatible with **[Friendli Container](https://friendli.ai/products/container/)**.

## Prerequisites

- Before you begin, make sure you have signed up for [Friendli Suite](https://suite.friendli.ai/). **You can use Friendli Containers free of charge for four weeks.**
- Prepare a Personal Access Token following [this guide](#preparing-personal-access-token).
- Prepare a Friendli Container Secret following [this guide](#preparing-container-secret).

### Preparing Personal Access Token

A PAT (Personal Access Token) is the user credential for logging into our container registry.

1. Sign in to [Friendli Suite](https://suite.friendli.ai/).
2. Go to **[User Settings > Tokens](https://suite.friendli.ai/user-settings/tokens)** and click **'Create new token'**.
3. Save your created token value.

### Preparing Container Secret

A container secret is a credential to launch our Friendli Container images. You should pass the container secret as an environment variable to run the container image.

1. Sign in to [Friendli Suite](https://suite.friendli.ai/).
2. Go to **Container > Container Secrets** and click **'Create secret'**.
3. Save your created secret value.

### Pulling Friendli Container Image

1. Log in to the Docker client using the personal access token created as outlined in [this guide](#preparing-personal-access-token).

   ```sh
   export FRIENDLI_PAT="YOUR PAT"
   export YOUR_EMAIL="YOUR EMAIL"  # the email address of your Friendli Suite account
   docker login registry.friendli.ai -u $YOUR_EMAIL -p $FRIENDLI_PAT
   ```

2. Pull the image:

   ```sh
   docker pull registry.friendli.ai/trial
   ```

## Running Friendli Container

Once you have pulled the Friendli Container image, you can launch it to create a serving endpoint.

```sh
docker run \
  --gpus '"device=0"' \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e FRIENDLI_CONTAINER_SECRET="YOUR CONTAINER SECRET" \
  registry.friendli.ai/trial \
  --web-server-port 8000 \
  --hf-model-name FriendliAI/Meta-Llama-3.1-8B-Instruct-int8 \
  --num-devices 1
```

---

## Model Information

The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.

**Model developer**: Meta

**Model Architecture:** Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
  <tr>
    <td></td>
    <td><strong>Training Data</strong></td>
    <td><strong>Params</strong></td>
    <td><strong>Input modalities</strong></td>
    <td><strong>Output modalities</strong></td>
    <td><strong>Context length</strong></td>
    <td><strong>GQA</strong></td>
    <td><strong>Token count</strong></td>
    <td><strong>Knowledge cutoff</strong></td>
  </tr>
  <tr>
    <td rowspan="3">Llama 3.1 (text only)</td>
    <td rowspan="3">A new mix of publicly available online data.</td>
    <td>8B</td>
    <td>Multilingual Text</td>
    <td>Multilingual Text and code</td>
    <td>128k</td>
    <td>Yes</td>
    <td rowspan="3">15T+</td>
    <td rowspan="3">December 2023</td>
  </tr>
  <tr>
    <td>70B</td>
    <td>Multilingual Text</td>
    <td>Multilingual Text and code</td>
    <td>128k</td>
    <td>Yes</td>
  </tr>
  <tr>
    <td>405B</td>
    <td>Multilingual Text</td>
    <td>Multilingual Text and code</td>
    <td>128k</td>
    <td>Yes</td>
  </tr>
</table>

**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

**Llama 3.1 family of models**. Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.

**Model Release Date:** July 23, 2024.

**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License:** A custom commercial license, the Llama 3.1 Community License, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)

**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).

## Intended Use

**Intended Use Cases** Llama 3.1 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.1 model collection also supports the ability to leverage the outputs of its models to improve other models, including synthetic data generation and distillation. The Llama 3.1 Community License allows for these use cases.

**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.1 Community License. Use in languages beyond those explicitly referenced as supported in this model card**.

**<span style="text-decoration:underline;">Note</span>: Llama 3.1 has been trained on a broader collection of languages than the 8 supported languages. Developers may fine-tune Llama 3.1 models for languages beyond the 8 supported languages provided they comply with the Llama 3.1 Community License and the Acceptable Use Policy, and in such cases are responsible for ensuring that any use of Llama 3.1 in additional languages is done in a safe and responsible manner.

## How to use

This repository contains two versions of Meta-Llama-3.1-8B-Instruct, for use with transformers and with the original `llama` codebase.
### Use with transformers

Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.

Make sure to update your transformers installation via `pip install --upgrade transformers`.

```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```

Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generation, quantization, and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes)

### Tool use with transformers

Llama 3.1 supports multiple tool use formats. You can see a full guide to prompt formatting [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/).

Tool use is also supported through [chat templates](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling) in Transformers. Here is a quick example showing a single simple tool:

```python
# First, define a tool
def get_current_temperature(location: str) -> float:
    """
    Get the current temperature at a location.

    Args:
        location: The location to get the temperature for, in the format "City, Country"
    Returns:
        The current temperature at the specified location in the specified units, as a float.
    """
    return 22.  # A real function should probably actually get the temperature!

# Next, create a chat and apply the chat template
messages = [
    {"role": "system", "content": "You are a bot that responds to weather queries."},
    {"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
]

inputs = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True)
```

You can then generate text from this input as normal. If the model generates a tool call, you should add it to the chat like so:

```python
tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})
```

and then call the tool and append the result, with the `tool` role, like so:

```python
messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})
```

After that, you can `generate()` again to let the model use the tool result in the chat. Note that this was a very brief introduction to tool calling - for more information, see the [LLaMA prompt format docs](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/) and the Transformers [tool use documentation](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling).
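The tool-use snippets above assume a `tokenizer` (and, for generation, a `model`) are already loaded. The following sketch fills in those loading steps and runs the final `generate()` call once the tool result has been appended; the generation settings are illustrative assumptions, and `messages` and `get_current_temperature` are the objects built in the snippets above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# `messages` is the chat history built above, ending with the
# {"role": "tool", ...} result; the tool schema is passed again so the
# chat template can render it.
input_ids = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_temperature],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# The model now reads the tool output and produces its final answer
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```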
### Use with `llama`

Please follow the instructions in the [repository](https://github.com/meta-llama/llama).

To download the original checkpoints, see the example command below leveraging `huggingface-cli`:

```
huggingface-cli download meta-llama/Meta-Llama-3.1-8B-Instruct --include "original/*" --local-dir Meta-Llama-3.1-8B-Instruct
```

## Hardware and Software

**Training Factors** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.

Training utilized a cumulative **39.3M** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.

**Training Greenhouse Gas Emissions** Estimated total location-based greenhouse gas emissions were **11,390** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy, therefore the total market-based greenhouse gas emissions for training were 0 tons CO2eq.

<table>
  <tr>
   <td>
   </td>
   <td><strong>Training Time (GPU hours)</strong>
   </td>
   <td><strong>Training Power Consumption (W)</strong>
   </td>
   <td><strong>Training Location-Based Greenhouse Gas Emissions</strong>
<p>
<strong>(tons CO2eq)</strong>
   </td>
   <td><strong>Training Market-Based Greenhouse Gas Emissions</strong>
<p>
<strong>(tons CO2eq)</strong>
   </td>
  </tr>
  <tr>
   <td>Llama 3.1 8B
   </td>
   <td>1.46M
   </td>
   <td>700
   </td>
   <td>420
   </td>
   <td>0
   </td>
  </tr>
  <tr>
   <td>Llama 3.1 70B
   </td>
   <td>7.0M
   </td>
   <td>700
   </td>
   <td>2,040
   </td>
   <td>0
   </td>
  </tr>
  <tr>
   <td>Llama 3.1 405B
   </td>
   <td>30.84M
   </td>
   <td>700
   </td>
   <td>8,930
   </td>
   <td>0
   </td>
  </tr>
  <tr>
   <td>Total
   </td>
   <td>39.3M
   </td>
   <td>
   </td>
   <td>11,390
   </td>
   <td>0
   </td>
  </tr>
</table>

The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.

## Training Data

**Overview:** Llama 3.1 was pretrained on ~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples.

**Data Freshness:** The pretraining data has a cutoff of December 2023.

## Benchmark scores

In this section, we report the results for Llama 3.1 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library.
### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong># Shots</strong> </td> <td><strong>Metric</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 3.1 8B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 3.1 70B</strong> </td> <td><strong>Llama 3.1 405B</strong> </td> </tr> <tr> <td rowspan="7" >General </td> <td>MMLU </td> <td>5 </td> <td>macro_avg/acc_char </td> <td>66.7 </td> <td>66.7 </td> <td>79.5 </td> <td>79.3 </td> <td>85.2 </td> </tr> <tr> <td>MMLU-Pro (CoT) </td> <td>5 </td> <td>macro_avg/acc_char </td> <td>36.2 </td> <td>37.1 </td> <td>55.0 </td> <td>53.8 </td> <td>61.6 </td> </tr> <tr> <td>AGIEval English </td> <td>3-5 </td> <td>average/acc_char </td> <td>47.1 </td> <td>47.8 </td> <td>63.0 </td> <td>64.6 </td> <td>71.6 </td> </tr> <tr> <td>CommonSenseQA </td> <td>7 </td> <td>acc_char </td> <td>72.6 </td> <td>75.0 </td> <td>83.8 </td> <td>84.1 </td> <td>85.8 </td> </tr> <tr> <td>Winogrande </td> <td>5 </td> <td>acc_char </td> <td>- </td> <td>60.5 </td> <td>- </td> <td>83.3 </td> <td>86.7 </td> </tr> <tr> <td>BIG-Bench Hard (CoT) </td> <td>3 </td> <td>average/em </td> <td>61.1 </td> <td>64.2 </td> <td>81.3 </td> <td>81.6 </td> <td>85.9 </td> </tr> <tr> <td>ARC-Challenge </td> <td>25 </td> <td>acc_char </td> <td>79.4 </td> <td>79.7 </td> <td>93.1 </td> <td>92.9 </td> <td>96.1 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki </td> <td>5 </td> <td>em </td> <td>78.5 </td> <td>77.6 </td> <td>89.7 </td> <td>89.8 </td> <td>91.8 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD </td> <td>1 </td> <td>em </td> <td>76.4 </td> <td>77.0 </td> <td>85.6 </td> <td>81.8 </td> <td>89.3 </td> </tr> <tr> <td>QuAC (F1) </td> <td>1 </td> <td>f1 </td> <td>44.4 </td> <td>44.9 </td> <td>51.1 </td> <td>51.1 </td> <td>53.6 </td> </tr> <tr> <td>BoolQ </td> <td>0 </td> <td>acc_char </td> <td>75.7 </td> <td>75.0 </td> <td>79.0 </td> <td>79.4 </td> <td>80.0 </td> </tr> <tr> <td>DROP (F1) </td> <td>3 </td> <td>f1 </td> <td>58.4 </td> <td>59.5 </td> <td>79.7 </td> <td>79.6 </td> <td>84.8 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong># Shots</strong> </td> <td><strong>Metric</strong> </td> <td><strong>Llama 3 8B Instruct</strong> </td> <td><strong>Llama 3.1 8B Instruct</strong> </td> <td><strong>Llama 3 70B Instruct</strong> </td> <td><strong>Llama 3.1 70B Instruct</strong> </td> <td><strong>Llama 3.1 405B Instruct</strong> </td> </tr> <tr> <td rowspan="4" >General </td> <td>MMLU </td> <td>5 </td> <td>macro_avg/acc </td> <td>68.5 </td> <td>69.4 </td> <td>82.0 </td> <td>83.6 </td> <td>87.3 </td> </tr> <tr> <td>MMLU (CoT) </td> <td>0 </td> <td>macro_avg/acc </td> <td>65.3 </td> <td>73.0 </td> <td>80.9 </td> <td>86.0 </td> <td>88.6 </td> </tr> <tr> <td>MMLU-Pro (CoT) </td> <td>5 </td> <td>micro_avg/acc_char </td> <td>45.5 </td> <td>48.3 </td> <td>63.4 </td> <td>66.4 </td> <td>73.3 </td> </tr> <tr> <td>IFEval </td> <td> </td> <td> </td> <td>76.8 </td> <td>80.4 </td> <td>82.9 </td> <td>87.5 </td> <td>88.6 </td> </tr> <tr> <td rowspan="2" >Reasoning </td> <td>ARC-C </td> <td>0 </td> <td>acc </td> <td>82.4 </td> <td>83.4 </td> <td>94.4 </td> <td>94.8 </td> <td>96.9 </td> </tr> <tr> <td>GPQA </td> <td>0 </td> <td>em </td> <td>34.6 </td> <td>30.4 </td> <td>39.5 </td> <td>41.7 </td> <td>50.7 </td> </tr> <tr> <td rowspan="4" >Code </td> <td>HumanEval </td> <td>0 </td> 
<td>pass@1
   </td>
   <td>60.4
   </td>
   <td>72.6
   </td>
   <td>81.7
   </td>
   <td>80.5
   </td>
   <td>89.0
   </td>
  </tr>
  <tr>
   <td>MBPP ++ base version
   </td>
   <td>0
   </td>
   <td>pass@1
   </td>
   <td>70.6
   </td>
   <td>72.8
   </td>
   <td>82.5
   </td>
   <td>86.0
   </td>
   <td>88.6
   </td>
  </tr>
  <tr>
   <td>MultiPL-E HumanEval
   </td>
   <td>0
   </td>
   <td>pass@1
   </td>
   <td>-
   </td>
   <td>50.8
   </td>
   <td>-
   </td>
   <td>65.5
   </td>
   <td>75.2
   </td>
  </tr>
  <tr>
   <td>MultiPL-E MBPP
   </td>
   <td>0
   </td>
   <td>pass@1
   </td>
   <td>-
   </td>
   <td>52.4
   </td>
   <td>-
   </td>
   <td>62.0
   </td>
   <td>65.7
   </td>
  </tr>
  <tr>
   <td rowspan="2" >Math
   </td>
   <td>GSM-8K (CoT)
   </td>
   <td>8
   </td>
   <td>em_maj1@1
   </td>
   <td>80.6
   </td>
   <td>84.5
   </td>
   <td>93.0
   </td>
   <td>95.1
   </td>
   <td>96.8
   </td>
  </tr>
  <tr>
   <td>MATH (CoT)
   </td>
   <td>0
   </td>
   <td>final_em
   </td>
   <td>29.1
   </td>
   <td>51.9
   </td>
   <td>51.0
   </td>
   <td>68.0
   </td>
   <td>73.8
   </td>
  </tr>
  <tr>
   <td rowspan="4" >Tool Use
   </td>
   <td>API-Bank
   </td>
   <td>0
   </td>
   <td>acc
   </td>
   <td>48.3
   </td>
   <td>82.6
   </td>
   <td>85.1
   </td>
   <td>90.0
   </td>
   <td>92.0
   </td>
  </tr>
  <tr>
   <td>BFCL
   </td>
   <td>0
   </td>
   <td>acc
   </td>
   <td>60.3
   </td>
   <td>76.1
   </td>
   <td>83.0
   </td>
   <td>84.8
   </td>
   <td>88.5
   </td>
  </tr>
  <tr>
   <td>Gorilla Benchmark API Bench
   </td>
   <td>0
   </td>
   <td>acc
   </td>
   <td>1.7
   </td>
   <td>8.2
   </td>
   <td>14.7
   </td>
   <td>29.7
   </td>
   <td>35.3
   </td>
  </tr>
  <tr>
   <td>Nexus (0-shot)
   </td>
   <td>0
   </td>
   <td>macro_avg/acc
   </td>
   <td>18.1
   </td>
   <td>38.5
   </td>
   <td>47.8
   </td>
   <td>56.7
   </td>
   <td>58.7
   </td>
  </tr>
  <tr>
   <td>Multilingual
   </td>
   <td>Multilingual MGSM (CoT)
   </td>
   <td>0
   </td>
   <td>em
   </td>
   <td>-
   </td>
   <td>68.9
   </td>
   <td>-
   </td>
   <td>86.9
   </td>
   <td>91.6
   </td>
  </tr>
</table>

#### Multilingual benchmarks

<table>
  <tr>
   <td><strong>Category</strong>
   </td>
   <td><strong>Benchmark</strong>
   </td>
   <td><strong>Language</strong>
   </td>
   <td><strong>Llama 3.1 8B</strong>
   </td>
   <td><strong>Llama 3.1 70B</strong>
   </td>
   <td><strong>Llama 3.1 405B</strong>
   </td>
  </tr>
  <tr>
   <td rowspan="7" ><strong>General</strong>
   </td>
   <td rowspan="7" ><strong>MMLU (5-shot, macro_avg/acc)</strong>
   </td>
   <td>Portuguese
   </td>
   <td>62.12
   </td>
   <td>80.13
   </td>
   <td>84.95
   </td>
  </tr>
  <tr>
   <td>Spanish
   </td>
   <td>62.45
   </td>
   <td>80.05
   </td>
   <td>85.08
   </td>
  </tr>
  <tr>
   <td>Italian
   </td>
   <td>61.63
   </td>
   <td>80.4
   </td>
   <td>85.04
   </td>
  </tr>
  <tr>
   <td>German
   </td>
   <td>60.59
   </td>
   <td>79.27
   </td>
   <td>84.36
   </td>
  </tr>
  <tr>
   <td>French
   </td>
   <td>62.34
   </td>
   <td>79.82
   </td>
   <td>84.66
   </td>
  </tr>
  <tr>
   <td>Hindi
   </td>
   <td>50.88
   </td>
   <td>74.52
   </td>
   <td>80.31
   </td>
  </tr>
  <tr>
   <td>Thai
   </td>
   <td>50.32
   </td>
   <td>72.95
   </td>
   <td>78.21
   </td>
  </tr>
</table>

## Responsibility & Safety

As part of our responsible release approach, we followed a three-pronged strategy for managing trust & safety risks:

* Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama.
* Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
* Provide protections for the community to help prevent the misuse of our models.

### Responsible deployment

Llama is a foundational technology designed to be used in a variety of use cases. Examples of how Meta's Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the power of this technology, by aligning our model safety for the generic use cases and addressing a standard set of harms.
Developers are then in the driver's seat to tailor safety for their use case, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.1 was developed following the best practices outlined in our Responsible Use Guide; you can refer to the [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to learn more.

#### Llama 3.1 instruct

Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications, reducing the workload for developers to deploy safe AI systems. For more details on the safety mitigations implemented, please read the Llama 3 paper.

**Fine-tuning data**

We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We've developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.

**Refusals and Tone**

Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.

#### Llama 3.1 systems

**Large language models, including Llama 3.1, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required.** Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment, as well as to mitigating safety and security risks inherent to the system and to any integration of the model or system with external tools.

As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard 3, Prompt Guard and Code Shield. All our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.

#### New capabilities

Note that this release introduces new capabilities, including a longer context window, multilingual inputs and outputs, and possible integrations by developers with third party tools. Building with these new capabilities requires specific considerations in addition to the best practices that generally apply across all Generative AI use cases.

**Tool-use**: Just like in standard software development, developers are responsible for the integration of the LLM with the tools and services of their choice. They should define a clear policy for their use case and assess the integrity of the third party services they use to be aware of the safety and security limitations when using this capability. Refer to the Responsible Use Guide for best practices on the safe deployment of third-party safeguards.

**Multilinguality**: Llama 3.1 supports 7 languages in addition to English: French, German, Hindi, Italian, Portuguese, Spanish, and Thai. Llama may be able to output text in languages other than those that meet performance thresholds for safety and helpfulness.
We strongly discourage developers from using this model to converse in non-supported languages without implementing fine-tuning and system controls in alignment with their policies and the best practices shared in the Responsible Use Guide.

### Evaluations

We evaluated Llama models for common use cases as well as specific capabilities. Common use case evaluations measure the safety risks of systems for the most commonly built applications, including chatbots, coding assistants, and tool calling. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case. Prompt Guard and Code Shield are also available if relevant to the application.

Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which we crafted dedicated benchmarks, including long context, multilingual use, tool calls, coding, and memorization.

**Red teaming**

For both scenarios, we conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting, and we used the learnings to improve our benchmarks and safety tuning datasets.

We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity, in addition to multilingual content specialists with backgrounds in integrity issues in specific geographic markets.

### Critical and other risks

We specifically focused our efforts on mitigating the following critical risk areas:

**1. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness**

To assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons.

**2. Child Safety**

Child Safety risk assessments were conducted using a team of experts to assess the model's capability to produce outputs that could result in Child Safety risks, and to inform any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances and experiences.

**3. Cyber attack enablement**

Our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.

Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks.
This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention.

Our study of Llama-3.1-405B's social engineering uplift for cyber attackers was conducted to assess the effectiveness of AI models in aiding cyber threat actors in spear phishing campaigns. Please read our Llama 3.1 Cyber security whitepaper to learn more.

### Community

Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and are widely distributed across ecosystem partners, including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).

We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta's Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists, selected from hundreds of applications, can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).

Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

The core values of Llama 3.1 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.1 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3.1 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.1's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.1 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
koesn/dolphin-2.9.4-llama3.1-8b-GGUF
koesn
2024-09-09T06:26:08Z
34
1
null
[ "gguf", "generated_from_trainer", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:mlabonne/FineTome-100k", "dataset:arcee/agent_data", "dataset:PawanKrd/math-gpt-4o-200k", "dataset:cognitivecomputations/SystemChat-2.0", "base_model:meta-llama/Llama-3.1-8B", "base_model:quantized:meta-llama/Llama-3.1-8B", "license:llama3.1", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-09T05:23:49Z
---
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- generated_from_trainer
datasets:
- cognitivecomputations/Dolphin-2.9
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- mlabonne/FineTome-100k
- arcee/agent_data
- PawanKrd/math-gpt-4o-200k
- cognitivecomputations/SystemChat-2.0
---

## Description

This repo contains GGUF format model files for dolphin-2.9.4-llama3.1-8b.

## Files Provided

| Name                                | Quant | Bits | File Size | Remark                          |
| ----------------------------------- | ----- | ---- | --------- | ------------------------------- |
| dolphin-2.9.4-llama3.1-8b.Q2_K.gguf | Q2_K  | 2    | 3.18 GB   | 2.96G, +3.5199 ppl @ Llama-3-8B |
| dolphin-2.9.4-llama3.1-8b.Q3_K.gguf | Q3_K  | 3    | 4.02 GB   | 3.74G, +0.6569 ppl @ Llama-3-8B |
| dolphin-2.9.4-llama3.1-8b.Q4_0.gguf | Q4_0  | 4    | 4.66 GB   | 4.34G, +0.4685 ppl @ Llama-3-8B |
| dolphin-2.9.4-llama3.1-8b.Q4_K.gguf | Q4_K  | 4    | 4.92 GB   | 4.58G, +0.1754 ppl @ Llama-3-8B |
| dolphin-2.9.4-llama3.1-8b.Q5_K.gguf | Q5_K  | 5    | 5.73 GB   | 5.33G, +0.0569 ppl @ Llama-3-8B |
| dolphin-2.9.4-llama3.1-8b.Q6_K.gguf | Q6_K  | 6    | 6.60 GB   | 6.14G, +0.0217 ppl @ Llama-3-8B |
| dolphin-2.9.4-llama3.1-8b.Q8_0.gguf | Q8_0  | 8    | 8.54 GB   | 7.96G, +0.0026 ppl @ Llama-3-8B |

## Parameters

| path                                            | type  | architecture     | rope_theta | sliding_win | max_pos_embed |
| ----------------------------------------------- | ----- | ---------------- | ---------- | ----------- | ------------- |
| cognitivecomputations/dolphin-2.9.4-llama3.1-8b | llama | LlamaForCausalLM | 500000.0   | null        | 131072        |

# Original Model Card

# Dolphin 2.9.4 Llama 3.1 8b 🐬

Curated and trained by Eric Hartford and Cognitive Computations

[![Discord](https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm)](https://discord.gg/h3K4XGj2RH)
Discord: https://discord.gg/h3K4XGj2RH

<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />

Our appreciation for the sponsors of Dolphin 2.9.4:

- [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 8xL40S node

This model is based on Meta Llama 3.1 8b and is governed by the Llama 3.1 license.

The base model has 128K context, and our finetuning used an 8192 sequence length.

Dolphin 2.9.4 uses the ChatML prompt template format. Example:

```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

Dolphin-2.9.4 has a variety of instruction following, conversational, and coding skills. It also has agentic abilities and supports function calling. It is especially trained to obey the system prompt and to follow instructions in many languages.

Dolphin is uncensored. We have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.
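Because the files in this repo are GGUF quants, one convenient way to run them locally is `llama-cpp-python`. A minimal sketch, using a file name from the table above and the ChatML format shown here; the context size mirrors the finetuning sequence length noted above, and the sampling settings are illustrative assumptions:

```python
from llama_cpp import Llama

# Load one of the quantized files from this repo (Q4_K shown)
llm = Llama(
    model_path="dolphin-2.9.4-llama3.1-8b.Q4_K.gguf",
    n_ctx=8192,  # matches the 8192 finetuning sequence length
)

# Build a prompt in the ChatML format the model was trained on
prompt = (
    "<|im_start|>system\n"
    "You are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a haiku about the sea.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Stop on the ChatML end-of-turn token so generation ends cleanly
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```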
<details><summary>Evals</summary> ``` hf (pretrained=/workspace/axolotl/dolphin-2.9.4-llama3.1-8b-hf,dtype=bfloat16), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto (4) | Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr| |-----------------------------------------------------------|-------|------|-----:|-----------------------|---|-----:|---|------| |leaderboard |N/A |none | 0|acc |↑ |0.2926|± |0.0041| | | |none | 0|acc_norm |↑ |0.4513|± |0.0053| | | |none | 0|exact_match |↑ |0.0982|± |0.0079| | | |none | 0|inst_level_loose_acc |↑ |0.3825|± |N/A | | | |none | 0|inst_level_strict_acc |↑ |0.3597|± |N/A | | | |none | 0|prompt_level_loose_acc |↑ |0.2421|± |0.0184| | | |none | 0|prompt_level_strict_acc|↑ |0.2181|± |0.0178| | - leaderboard_bbh |N/A |none | 3|acc_norm |↑ |0.4931|± |0.0061| | - leaderboard_bbh_boolean_expressions | 0|none | 3|acc_norm |↑ |0.8000|± |0.0253| | - leaderboard_bbh_causal_judgement | 0|none | 3|acc_norm |↑ |0.5615|± |0.0364| | - leaderboard_bbh_date_understanding | 0|none | 3|acc_norm |↑ |0.4520|± |0.0315| | - leaderboard_bbh_disambiguation_qa | 0|none | 3|acc_norm |↑ |0.6640|± |0.0299| | - leaderboard_bbh_formal_fallacies | 0|none | 3|acc_norm |↑ |0.5600|± |0.0315| | - leaderboard_bbh_geometric_shapes | 0|none | 3|acc_norm |↑ |0.3640|± |0.0305| | - leaderboard_bbh_hyperbaton | 0|none | 3|acc_norm |↑ |0.6320|± |0.0306| | - leaderboard_bbh_logical_deduction_five_objects | 0|none | 3|acc_norm |↑ |0.4600|± |0.0316| | - leaderboard_bbh_logical_deduction_seven_objects | 0|none | 3|acc_norm |↑ |0.4360|± |0.0314| | - leaderboard_bbh_logical_deduction_three_objects | 0|none | 3|acc_norm |↑ |0.6160|± |0.0308| | - leaderboard_bbh_movie_recommendation | 0|none | 3|acc_norm |↑ |0.7880|± |0.0259| | - leaderboard_bbh_navigate | 0|none | 3|acc_norm |↑ |0.5200|± |0.0317| | - leaderboard_bbh_object_counting | 0|none | 3|acc_norm |↑ |0.4520|± |0.0315| | - leaderboard_bbh_penguins_in_a_table | 0|none | 3|acc_norm |↑ |0.5205|± |0.0415| | - leaderboard_bbh_reasoning_about_colored_objects | 0|none | 3|acc_norm |↑ |0.5120|± |0.0317| | - leaderboard_bbh_ruin_names | 0|none | 3|acc_norm |↑ |0.6320|± |0.0306| | - leaderboard_bbh_salient_translation_error_detection | 0|none | 3|acc_norm |↑ |0.4320|± |0.0314| | - leaderboard_bbh_snarks | 0|none | 3|acc_norm |↑ |0.5843|± |0.0370| | - leaderboard_bbh_sports_understanding | 0|none | 3|acc_norm |↑ |0.7040|± |0.0289| | - leaderboard_bbh_temporal_sequences | 0|none | 3|acc_norm |↑ |0.1440|± |0.0222| | - leaderboard_bbh_tracking_shuffled_objects_five_objects | 0|none | 3|acc_norm |↑ |0.1560|± |0.0230| | - leaderboard_bbh_tracking_shuffled_objects_seven_objects| 0|none | 3|acc_norm |↑ |0.1320|± |0.0215| | - leaderboard_bbh_tracking_shuffled_objects_three_objects| 0|none | 3|acc_norm |↑ |0.2840|± |0.0286| | - leaderboard_bbh_web_of_lies | 0|none | 3|acc_norm |↑ |0.4840|± |0.0317| | - leaderboard_gpqa |N/A |none | 0|acc_norm |↑ |0.2903|± |0.0132| | - leaderboard_gpqa_diamond | 1|none | 0|acc_norm |↑ |0.2980|± |0.0326| | - leaderboard_gpqa_extended | 1|none | 0|acc_norm |↑ |0.2839|± |0.0193| | - leaderboard_gpqa_main | 1|none | 0|acc_norm |↑ |0.2946|± |0.0216| | - leaderboard_ifeval | 2|none | 0|inst_level_loose_acc |↑ |0.3825|± |N/A | | | |none | 0|inst_level_strict_acc |↑ |0.3597|± |N/A | | | |none | 0|prompt_level_loose_acc |↑ |0.2421|± |0.0184| | | |none | 0|prompt_level_strict_acc|↑ |0.2181|± |0.0178| | - leaderboard_math_algebra_hard | 1|none | 4|exact_match |↑ |0.1596|± |0.0209| | - leaderboard_math_counting_and_prob_hard | 
1|none | 4|exact_match |↑ |0.0488|± |0.0195| | - leaderboard_math_geometry_hard | 1|none | 4|exact_match |↑ |0.0530|± |0.0196| | - leaderboard_math_hard |N/A |none | 4|exact_match |↑ |0.0982|± |0.0079| | - leaderboard_math_intermediate_algebra_hard | 1|none | 4|exact_match |↑ |0.0143|± |0.0071| | - leaderboard_math_num_theory_hard | 1|none | 4|exact_match |↑ |0.0455|± |0.0168| | - leaderboard_math_prealgebra_hard | 1|none | 4|exact_match |↑ |0.2591|± |0.0316| | - leaderboard_math_precalculus_hard | 1|none | 4|exact_match |↑ |0.0519|± |0.0192| | - leaderboard_mmlu_pro | 0.1|none | 5|acc |↑ |0.2926|± |0.0041| | - leaderboard_musr |N/A |none | 0|acc_norm |↑ |0.3862|± |0.0173| | - leaderboard_musr_murder_mysteries | 1|none | 0|acc_norm |↑ |0.5280|± |0.0316| | - leaderboard_musr_object_placements | 1|none | 0|acc_norm |↑ |0.3594|± |0.0300| | - leaderboard_musr_team_allocation | 1|none | 0|acc_norm |↑ |0.2720|± |0.0282| | Groups |Version|Filter|n-shot| Metric | |Value | |Stderr| |------------------------|-------|------|-----:|-----------------------|---|-----:|---|------| |leaderboard |N/A |none | 0|acc |↑ |0.2926|± |0.0041| | | |none | 0|acc_norm |↑ |0.4513|± |0.0053| | | |none | 0|exact_match |↑ |0.0982|± |0.0079| | | |none | 0|inst_level_loose_acc |↑ |0.3825|± |N/A | | | |none | 0|inst_level_strict_acc |↑ |0.3597|± |N/A | | | |none | 0|prompt_level_loose_acc |↑ |0.2421|± |0.0184| | | |none | 0|prompt_level_strict_acc|↑ |0.2181|± |0.0178| | - leaderboard_bbh |N/A |none | 3|acc_norm |↑ |0.4931|± |0.0061| | - leaderboard_gpqa |N/A |none | 0|acc_norm |↑ |0.2903|± |0.0132| | - leaderboard_math_hard|N/A |none | 4|exact_match |↑ |0.0982|± |0.0079| | - leaderboard_musr |N/A |none | 0|acc_norm |↑ |0.3862|± |0.0173| ``` </details> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: meta-llama/Meta-Llama-3.1-8B model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false # load_in_4bit: true strict: false datasets: - path: /workspace/datasets/dolphin-2.9.4/dolphin201-sharegpt2.jsonl type: sharegpt conversation: chatml chat_template: chatml # adapter: qlora # lora_r: 128 # lora_alpha: 16 # lora_modules_to_save: [embed_tokens, lm_head] # lora_dropout: 0.05 # lora_target_linear: true unfrozen_parameters: - input_layernorm - model.norm - post_attention_layernorm - self_attn.rotary_emb - ^lm_head.weight$ - ^model.embed_tokens.weight$ # mlp.down_proj layers - model.layers.1.mlp.down_proj - model.layers.0.mlp.down_proj - model.layers.30.mlp.down_proj - model.layers.2.mlp.down_proj - model.layers.21.mlp.down_proj - model.layers.22.mlp.down_proj - model.layers.29.mlp.down_proj - model.layers.5.mlp.down_proj - model.layers.4.mlp.down_proj - model.layers.20.mlp.down_proj - model.layers.23.mlp.down_proj - model.layers.19.mlp.down_proj - model.layers.3.mlp.down_proj - model.layers.17.mlp.down_proj - model.layers.6.mlp.down_proj - model.layers.31.mlp.down_proj # mlp.up_proj layers - model.layers.4.mlp.up_proj - model.layers.3.mlp.up_proj - model.layers.0.mlp.up_proj - model.layers.5.mlp.up_proj - model.layers.7.mlp.up_proj - model.layers.6.mlp.up_proj - model.layers.2.mlp.up_proj - model.layers.1.mlp.up_proj - model.layers.8.mlp.up_proj - model.layers.12.mlp.up_proj - model.layers.14.mlp.up_proj - model.layers.9.mlp.up_proj - model.layers.15.mlp.up_proj - 
model.layers.17.mlp.up_proj - model.layers.13.mlp.up_proj - model.layers.19.mlp.up_proj # self_attn.k_proj layers - model.layers.29.self_attn.k_proj - model.layers.25.self_attn.k_proj - model.layers.23.self_attn.k_proj - model.layers.28.self_attn.k_proj - model.layers.21.self_attn.k_proj - model.layers.19.self_attn.k_proj - model.layers.22.self_attn.k_proj - model.layers.20.self_attn.k_proj - model.layers.24.self_attn.k_proj - model.layers.31.self_attn.k_proj - model.layers.27.self_attn.k_proj - model.layers.26.self_attn.k_proj - model.layers.17.self_attn.k_proj - model.layers.11.self_attn.k_proj - model.layers.18.self_attn.k_proj - model.layers.14.self_attn.k_proj # self_attn.o_proj layers - model.layers.14.self_attn.o_proj - model.layers.7.self_attn.o_proj - model.layers.5.self_attn.o_proj - model.layers.11.self_attn.o_proj - model.layers.6.self_attn.o_proj - model.layers.24.self_attn.o_proj - model.layers.9.self_attn.o_proj - model.layers.13.self_attn.o_proj - model.layers.10.self_attn.o_proj - model.layers.12.self_attn.o_proj - model.layers.8.self_attn.o_proj - model.layers.25.self_attn.o_proj - model.layers.21.self_attn.o_proj - model.layers.23.self_attn.o_proj - model.layers.15.self_attn.o_proj - model.layers.16.self_attn.o_proj # self_attn.q_proj layers - model.layers.8.self_attn.q_proj - model.layers.13.self_attn.q_proj - model.layers.9.self_attn.q_proj - model.layers.14.self_attn.q_proj - model.layers.10.self_attn.q_proj - model.layers.11.self_attn.q_proj - model.layers.0.self_attn.q_proj - model.layers.15.self_attn.q_proj - model.layers.1.self_attn.q_proj - model.layers.6.self_attn.q_proj - model.layers.5.self_attn.q_proj - model.layers.7.self_attn.q_proj - model.layers.12.self_attn.q_proj - model.layers.16.self_attn.q_proj - model.layers.17.self_attn.q_proj - model.layers.26.self_attn.q_proj # self_attn.v_proj layers - model.layers.26.self_attn.v_proj - model.layers.17.self_attn.v_proj - model.layers.3.self_attn.v_proj - model.layers.28.self_attn.v_proj - model.layers.29.self_attn.v_proj - model.layers.21.self_attn.v_proj - model.layers.15.self_attn.v_proj - model.layers.16.self_attn.v_proj - model.layers.20.self_attn.v_proj - model.layers.25.self_attn.v_proj - model.layers.6.self_attn.v_proj - model.layers.23.self_attn.v_proj - model.layers.4.self_attn.v_proj - model.layers.1.self_attn.v_proj - model.layers.22.self_attn.v_proj - model.layers.14.self_attn.v_proj # mlp.gate_proj layers - model.layers.1.mlp.gate_proj - model.layers.2.mlp.gate_proj - model.layers.3.mlp.gate_proj - model.layers.4.mlp.gate_proj - model.layers.0.mlp.gate_proj - model.layers.25.mlp.gate_proj - model.layers.26.mlp.gate_proj - model.layers.5.mlp.gate_proj - model.layers.24.mlp.gate_proj - model.layers.28.mlp.gate_proj - model.layers.23.mlp.gate_proj - model.layers.27.mlp.gate_proj - model.layers.21.mlp.gate_proj - model.layers.22.mlp.gate_proj - model.layers.29.mlp.gate_proj - model.layers.20.mlp.gate_proj dataset_prepared_path: /workspace/axolotl/dolph-2.9.4-nemo-prepared val_set_size: 0.01 output_dir: /workspace/axolotl/dolphin-2.9.4-llama3.1-8b sequence_len: 8192 sample_packing: true pad_to_sequence_len: true wandb_project: dolphin-2.9.4-llama3.1-8b wandb_watch: wandb_run_id: wandb_log_model: gradient_accumulation_steps: 16 micro_batch_size: 2 num_epochs: 3 optimizer: adamw_torch lr_scheduler: cosine learning_rate: 5e-6 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: 
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 100
# evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
save_total_limit: 2
save_steps:
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.1
special_tokens:
  eos_token: "<|im_end|>"
  bos_token: "<|begin_of_text|>"
  pad_token: "<|finetune_right_pad_id|>"
tokens:
- "<|im_start|>"

# fsdp:
#   - full_shard
#   - auto_wrap
# fsdp_config:
#   fsdp_limit_all_gathers: true
#   fsdp_sync_module_states: true
#   fsdp_offload_params: true
#   fsdp_use_orig_params: false
#   fsdp_cpu_ram_efficient_loading: true
#   fsdp_transformer_layer_cls_to_wrap: MixtralSparseMoeBlock
#   fsdp_state_dict_type: FULL_STATE_DICT
#   fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
#   fsdp_sharding_strategy: FULL_SHARD
#   fsdp_forward_prefetch: false
#   fsdp_backward_prefetch: BACKWARD_PRE
```

</details><br>

# workspace/axolotl/dolphin-2.9.4-llama3.1-8b

This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5655

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 3

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5837        | 1.0180 | 1161 | 0.5814          |
| 0.5525        | 2.0179 | 2322 | 0.5671          |
| 0.5514        | 2.9624 | 3420 | 0.5655          |

### Framework versions

- Transformers 4.44.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
akshitguptafintek24/exxon-semantic-search
akshitguptafintek24
2024-09-09T06:25:02Z
7
0
sentence-transformers
[ "sentence-transformers", "safetensors", "mpnet", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:50", "loss:CosineSimilarityLoss", "arxiv:1908.10084", "base_model:sentence-transformers/all-mpnet-base-v2", "base_model:finetune:sentence-transformers/all-mpnet-base-v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-09-09T06:22:32Z
---
base_model: sentence-transformers/all-mpnet-base-v2
datasets: []
language: []
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:50
- loss:CosineSimilarityLoss
widget:
- source_sentence: Freepoint Commodity services venture
  sentences:
  - DUPLI OF 823707 BITUBULK SRL VESSEL
  - Freepoint Commodities LLC
  - AUGUSTA ENERGY DMCC
- source_sentence: BNG INT private ltd
  sentences:
  - BGN INT DMCC
  - Count Energy PA
  - BB Energy Group Holding Ltd
- source_sentence: Act fuel ball venture
  sentences:
  - ADDAX ENERGY SA
  - BITUME INVEST S.A.R.L
  - Altis Group International, LLC
- source_sentence: BW gas product ltd
  sentences:
  - Bulk Trading SA
  - BINH SON REFINING AND PETRO LPIINTL
  - BW LPG PRODUCT SERVICES LPIINTL
- source_sentence: Altis private limited
  sentences:
  - E1 Corporation
  - Diersch & Schröder GmbH & Co. KG
  - Altis Group International, LLC
model-index:
- name: SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
  results:
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: sts test
      type: sts-test
    metrics:
    - type: pearson_cosine
      value: 0.9446733306821109
      name: Pearson Cosine
    - type: spearman_cosine
      value: 0.9249801057480238
      name: Spearman Cosine
    - type: pearson_manhattan
      value: 0.9624404790642681
      name: Pearson Manhattan
    - type: spearman_manhattan
      value: 0.9269933391918109
      name: Spearman Manhattan
    - type: pearson_euclidean
      value: 0.9638295828361044
      name: Pearson Euclidean
    - type: spearman_euclidean
      value: 0.9249801057480238
      name: Spearman Euclidean
    - type: pearson_dot
      value: 0.9446733259374165
      name: Pearson Dot
    - type: spearman_dot
      value: 0.9249801057480238
      name: Spearman Dot
    - type: pearson_max
      value: 0.9638295828361044
      name: Pearson Max
    - type: spearman_max
      value: 0.9269933391918109
      name: Spearman Max
---

# SentenceTransformer based on sentence-transformers/all-mpnet-base-v2

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) on the train dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 84f2bcc00d77236f9e89c8a360a00fb1139bf47d -->
- **Maximum Sequence Length:** 384 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - train
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("akshitguptafintek24/exxon-semantic-search")
# Run inference
sentences = [
    'Altis private limited',
    'Altis Group International, LLC',
    'Diersch & Schröder GmbH & Co. KG',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Semantic Similarity

* Dataset: `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)

| Metric              | Value     |
|:--------------------|:----------|
| pearson_cosine      | 0.9447    |
| **spearman_cosine** | **0.925** |
| pearson_manhattan   | 0.9624    |
| spearman_manhattan  | 0.927     |
| pearson_euclidean   | 0.9638    |
| spearman_euclidean  | 0.925     |
| pearson_dot         | 0.9447    |
| spearman_dot        | 0.925     |
| pearson_max         | 0.9638    |
| spearman_max        | 0.927     |

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues?
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### train * Dataset: train * Size: 50 training samples * Columns: <code>Applicant name</code>, <code>Customer name</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | Applicant name | Customer name | score | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 5 tokens</li><li>mean: 7.36 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 8.32 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 0.54</li><li>mean: 0.86</li><li>max: 1.0</li></ul> | * Samples: | Applicant name | Customer name | score | |:-----------------------------------|:--------------------------------------|:------------------| | <code>Act Commodity GBV</code> | <code>ACT Commodities Group BV</code> | <code>1.0</code> | | <code>Act Commodity GBV</code> | <code>ACT Fuels B.V.</code> | <code>0.76</code> | | <code>Act fuel ball venture</code> | <code>ACT Fuels B.V.</code> | <code>1.0</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 30 - `warmup_ratio`: 0.1 #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 30 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 
'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `eval_use_gather_object`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | sts-test_spearman_cosine | |:-----:|:----:|:------------------------:| | 30.0 | 120 | 0.9250 | ### Framework Versions - Python: 3.11.9 - Sentence Transformers: 3.0.1 - Transformers: 4.44.2 - PyTorch: 2.4.0+cpu - Accelerate: 0.33.0 - Datasets: 2.21.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
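For readers who want to reproduce this setup, here is a rough sketch of the fine-tuning described above, using the classic `fit` API with pairs and hyperparameter values taken from the tables in this card. The actual training script is not published (the card was produced via the newer trainer), so treat this as an approximation:

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# (Applicant name, Customer name, score) pairs as in the samples above
train_examples = [
    InputExample(texts=["Act Commodity GBV", "ACT Commodities Group BV"], label=1.0),
    InputExample(texts=["Act Commodity GBV", "ACT Fuels B.V."], label=0.76),
    InputExample(texts=["Act fuel ball venture", "ACT Fuels B.V."], label=1.0),
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)  # MSE on cosine scores

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=30,        # num_train_epochs above
    warmup_steps=12,  # ~10% of the 120 total steps (warmup_ratio: 0.1)
)
```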
omarabb315/Query-5KM-merged_2
omarabb315
2024-09-09T06:24:26Z
146
0
transformers
[ "transformers", "pytorch", "jais", "text-generation", "trl", "sft", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
2024-09-09T06:23:15Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Imask/Large_2000samples
Imask
2024-09-09T06:23:32Z
104
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-large-cnn", "base_model:finetune:facebook/bart-large-cnn", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-09T06:22:33Z
--- library_name: transformers license: mit base_model: facebook/bart-large-cnn tags: - generated_from_trainer metrics: - rouge model-index: - name: Large_2000samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Large_2000samples This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6928 - Model Preparation Time: 0.0074 - Rouge1: 67.9624 - Rouge2: 52.7758 - Rougel: 54.3155 - Rougelsum: 65.1103 - Gen Len: 102.99 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 300 - num_epochs: 2 - label_smoothing_factor: 0.1 ### Training results ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
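The card above omits a usage snippet, so here is a minimal inference sketch; it assumes the model is used for summarization like its `facebook/bart-large-cnn` base (the task, example input, and generation settings are assumptions, not taken from the card):

```python
from transformers import pipeline

# Assumption: a summarization fine-tune of bart-large-cnn (the ROUGE metrics above point that way).
summarizer = pipeline("summarization", model="Imask/Large_2000samples")

article = "Paste a long document here ..."
print(summarizer(article, max_length=128, min_length=30)[0]["summary_text"])
```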
jvelja/vllm-gemma2b_7
jvelja
2024-09-09T06:22:58Z
58
0
transformers
[ "transformers", "pytorch", "safetensors", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "endpoints_compatible", "region:us" ]
reinforcement-learning
2024-09-09T06:22:55Z
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---

# TRL Model

This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.

## Usage

To use this model for inference, first install the TRL library:

```bash
python -m pip install trl
```

You can then generate text as follows:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="jvelja/vllm-gemma2b_7")
outputs = generator("Hello, my llama is cute")
```

If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:

```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("jvelja/vllm-gemma2b_7")
model = AutoModelForCausalLMWithValueHead.from_pretrained("jvelja/vllm-gemma2b_7")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
schnapper79/lumikabra-123B_v0.4-exl2-5.0bpw
schnapper79
2024-09-09T06:20:22Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "5-bit", "exl2", "region:us" ]
text-generation
2024-09-09T05:47:41Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the della_linear merge method using /workspace/text-generation-webui/models/mistralai_Mistral-Large-Instruct-2407 as a base. ### Models Merged The following models were included in the merge: * /workspace/text-generation-webui/models/FluffyKaeloky_Luminum-v0.1-123B * /workspace/text-generation-webui/models/anthracite-org_magnum-v2-123b * /workspace/text-generation-webui/models/migtissera_Tess-3-Mistral-Large-2-123B ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: /workspace/text-generation-webui/models/anthracite-org_magnum-v2-123b parameters: weight: 0.25 density: 0.9 - model: /workspace/text-generation-webui/models/FluffyKaeloky_Luminum-v0.1-123B parameters: weight: 0.25 density: 0.9 - model: /workspace/text-generation-webui/models/migtissera_Tess-3-Mistral-Large-2-123B parameters: weight: 0.3 density: 0.9 merge_method: della_linear base_model: /workspace/text-generation-webui/models/mistralai_Mistral-Large-Instruct-2407 parameters: epsilon: 0.05 lambda: 1 int8_mask: true dtype: bfloat16 ```
nimishaP/distilbert-base-uncased-finetuned-emotion
nimishaP
2024-09-09T05:54:50Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-09T05:44:35Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2153 - Accuracy: 0.9265 - F1: 0.9266 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8289 | 1.0 | 250 | 0.3093 | 0.9045 | 0.9038 | | 0.241 | 2.0 | 500 | 0.2153 | 0.9265 | 0.9266 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
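To try the classifier, a minimal sketch follows; the example sentence is illustrative, and the exact emotion label set depends on the (unspecified) training dataset:

```python
from transformers import pipeline

# Assumption: an emotion classifier; the card reports accuracy/F1 but not the label set.
classifier = pipeline(
    "text-classification",
    model="nimishaP/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am thrilled with these results!"))
```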
mradermacher/CodeLlama-34b-Instruct-hf-GGUF
mradermacher
2024-09-09T05:45:00Z
43
0
transformers
[ "transformers", "gguf", "llama-2", "code", "base_model:codellama/CodeLlama-34b-Instruct-hf", "base_model:quantized:codellama/CodeLlama-34b-Instruct-hf", "license:llama2", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-08T15:03:38Z
--- base_model: codellama/CodeLlama-34b-Instruct-hf language: - code library_name: transformers license: llama2 quantized_by: mradermacher tags: - llama-2 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/CodeLlama-34b-Instruct-hf-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-Instruct-hf-GGUF/resolve/main/CodeLlama-34b-Instruct-hf.Q2_K.gguf) | Q2_K | 12.6 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-Instruct-hf-GGUF/resolve/main/CodeLlama-34b-Instruct-hf.IQ3_XS.gguf) | IQ3_XS | 14.0 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-Instruct-hf-GGUF/resolve/main/CodeLlama-34b-Instruct-hf.Q3_K_S.gguf) | Q3_K_S | 14.7 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-Instruct-hf-GGUF/resolve/main/CodeLlama-34b-Instruct-hf.IQ3_S.gguf) | IQ3_S | 14.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-Instruct-hf-GGUF/resolve/main/CodeLlama-34b-Instruct-hf.IQ3_M.gguf) | IQ3_M | 15.3 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-Instruct-hf-GGUF/resolve/main/CodeLlama-34b-Instruct-hf.Q3_K_M.gguf) | Q3_K_M | 16.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-Instruct-hf-GGUF/resolve/main/CodeLlama-34b-Instruct-hf.Q3_K_L.gguf) | Q3_K_L | 17.9 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-Instruct-hf-GGUF/resolve/main/CodeLlama-34b-Instruct-hf.IQ4_XS.gguf) | IQ4_XS | 18.3 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-Instruct-hf-GGUF/resolve/main/CodeLlama-34b-Instruct-hf.Q4_K_S.gguf) | Q4_K_S | 19.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-Instruct-hf-GGUF/resolve/main/CodeLlama-34b-Instruct-hf.Q4_K_M.gguf) | Q4_K_M | 20.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-Instruct-hf-GGUF/resolve/main/CodeLlama-34b-Instruct-hf.Q5_K_S.gguf) | Q5_K_S | 23.3 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-Instruct-hf-GGUF/resolve/main/CodeLlama-34b-Instruct-hf.Q5_K_M.gguf) | Q5_K_M | 23.9 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-Instruct-hf-GGUF/resolve/main/CodeLlama-34b-Instruct-hf.Q6_K.gguf) | Q6_K | 27.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-Instruct-hf-GGUF/resolve/main/CodeLlama-34b-Instruct-hf.Q8_0.gguf) | Q8_0 | 36.0 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
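Beyond the linked READMEs, one way to run these files from Python is the third-party `llama-cpp-python` binding (not mentioned in the card; the chosen quant file, context size, and prompt are assumptions):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Assumption: CodeLlama-34b-Instruct-hf.Q4_K_M.gguf was downloaded to the working directory first.
llm = Llama(model_path="CodeLlama-34b-Instruct-hf.Q4_K_M.gguf", n_ctx=4096)

out = llm("[INST] Write a Python function that reverses a string. [/INST]", max_tokens=256)
print(out["choices"][0]["text"])
```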
mradermacher/deepseek-coder-33b-instruct-GGUF
mradermacher
2024-09-09T05:38:49Z
3,152
2
transformers
[ "transformers", "gguf", "en", "base_model:deepseek-ai/deepseek-coder-33b-instruct", "base_model:quantized:deepseek-ai/deepseek-coder-33b-instruct", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-06T17:42:06Z
--- base_model: deepseek-ai/deepseek-coder-33b-instruct language: - en library_name: transformers license: other license_link: LICENSE license_name: deepseek quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/deepseek-coder-33b-instruct-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-instruct-GGUF/resolve/main/deepseek-coder-33b-instruct.Q2_K.gguf) | Q2_K | 12.5 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-instruct-GGUF/resolve/main/deepseek-coder-33b-instruct.IQ3_XS.gguf) | IQ3_XS | 13.8 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-instruct-GGUF/resolve/main/deepseek-coder-33b-instruct.Q3_K_S.gguf) | Q3_K_S | 14.5 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-instruct-GGUF/resolve/main/deepseek-coder-33b-instruct.IQ3_S.gguf) | IQ3_S | 14.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-instruct-GGUF/resolve/main/deepseek-coder-33b-instruct.IQ3_M.gguf) | IQ3_M | 15.1 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-instruct-GGUF/resolve/main/deepseek-coder-33b-instruct.Q3_K_M.gguf) | Q3_K_M | 16.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-instruct-GGUF/resolve/main/deepseek-coder-33b-instruct.Q3_K_L.gguf) | Q3_K_L | 17.7 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-instruct-GGUF/resolve/main/deepseek-coder-33b-instruct.IQ4_XS.gguf) | IQ4_XS | 18.1 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-instruct-GGUF/resolve/main/deepseek-coder-33b-instruct.Q4_K_S.gguf) | Q4_K_S | 19.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-instruct-GGUF/resolve/main/deepseek-coder-33b-instruct.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-instruct-GGUF/resolve/main/deepseek-coder-33b-instruct.Q5_K_S.gguf) | Q5_K_S | 23.1 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-instruct-GGUF/resolve/main/deepseek-coder-33b-instruct.Q5_K_M.gguf) | Q5_K_M | 23.6 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-instruct-GGUF/resolve/main/deepseek-coder-33b-instruct.Q6_K.gguf) | Q6_K | 27.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-instruct-GGUF/resolve/main/deepseek-coder-33b-instruct.Q8_0.gguf) | Q8_0 | 35.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some 
answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
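As an alternative to downloading files by hand, a short sketch using the `huggingface_hub` Python API (the chosen quant file is an assumption; any filename from the table above works):

```python
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Downloads one quant file from this repo and returns its local path.
path = hf_hub_download(
    repo_id="mradermacher/deepseek-coder-33b-instruct-GGUF",
    filename="deepseek-coder-33b-instruct.Q4_K_M.gguf",
)
print(path)
```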
mradermacher/claude2-alpaca-13B-i1-GGUF
mradermacher
2024-09-09T05:33:17Z
75
0
transformers
[ "transformers", "gguf", "en", "dataset:umd-zhou-lab/claude2_alpaca", "base_model:umd-zhou-lab/claude2-alpaca-13B", "base_model:quantized:umd-zhou-lab/claude2-alpaca-13B", "license:llama2", "endpoints_compatible", "region:us", "imatrix" ]
null
2024-09-09T01:42:03Z
--- base_model: umd-zhou-lab/claude2-alpaca-13B datasets: - umd-zhou-lab/claude2_alpaca language: - en library_name: transformers license: llama2 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/umd-zhou-lab/claude2-alpaca-13B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/claude2-alpaca-13B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/claude2-alpaca-13B-i1-GGUF/resolve/main/claude2-alpaca-13B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/claude2-alpaca-13B-i1-GGUF/resolve/main/claude2-alpaca-13B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/claude2-alpaca-13B-i1-GGUF/resolve/main/claude2-alpaca-13B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/claude2-alpaca-13B-i1-GGUF/resolve/main/claude2-alpaca-13B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/claude2-alpaca-13B-i1-GGUF/resolve/main/claude2-alpaca-13B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/claude2-alpaca-13B-i1-GGUF/resolve/main/claude2-alpaca-13B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/claude2-alpaca-13B-i1-GGUF/resolve/main/claude2-alpaca-13B.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/claude2-alpaca-13B-i1-GGUF/resolve/main/claude2-alpaca-13B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/claude2-alpaca-13B-i1-GGUF/resolve/main/claude2-alpaca-13B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/claude2-alpaca-13B-i1-GGUF/resolve/main/claude2-alpaca-13B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/claude2-alpaca-13B-i1-GGUF/resolve/main/claude2-alpaca-13B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/claude2-alpaca-13B-i1-GGUF/resolve/main/claude2-alpaca-13B.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/claude2-alpaca-13B-i1-GGUF/resolve/main/claude2-alpaca-13B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/claude2-alpaca-13B-i1-GGUF/resolve/main/claude2-alpaca-13B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/claude2-alpaca-13B-i1-GGUF/resolve/main/claude2-alpaca-13B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/claude2-alpaca-13B-i1-GGUF/resolve/main/claude2-alpaca-13B.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/claude2-alpaca-13B-i1-GGUF/resolve/main/claude2-alpaca-13B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality | | 
[GGUF](https://huggingface.co/mradermacher/claude2-alpaca-13B-i1-GGUF/resolve/main/claude2-alpaca-13B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/claude2-alpaca-13B-i1-GGUF/resolve/main/claude2-alpaca-13B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/claude2-alpaca-13B-i1-GGUF/resolve/main/claude2-alpaca-13B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/claude2-alpaca-13B-i1-GGUF/resolve/main/claude2-alpaca-13B.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
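For chat-style use of one of these quants, a minimal sketch with the third-party `llama-cpp-python` binding (file name, context size, and prompt are assumptions):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Assumption: the i1-Q4_K_M quant was downloaded locally first.
llm = Llama(model_path="claude2-alpaca-13B.i1-Q4_K_M.gguf", n_ctx=2048)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain the Alpaca instruction format in two sentences."}]
)
print(resp["choices"][0]["message"]["content"])
```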
jvelja/BERT_vllm-gemma2b_6
jvelja
2024-09-09T05:31:45Z
106
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-09T05:31:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sagawa/ReactionT5v2-forward
sagawa
2024-09-09T05:19:46Z
218
3
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "chemistry", "SMILES", "product", "en", "dataset:ORD", "arxiv:2311.06708", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-07-28T07:31:04Z
---
language:
- en
license: mit
tags:
- chemistry
- SMILES
- product
datasets:
- ORD
metrics:
- accuracy
---

# Model Card for ReactionT5v2-forward

This is a ReactionT5 model pre-trained to predict the products of chemical reactions. You can use the demo [here](https://huggingface.co/spaces/sagawa/ReactionT5_task_forward).

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/sagawatatsuya/ReactionT5v2
- **Paper:** https://arxiv.org/abs/2311.06708
- **Demo:** https://huggingface.co/spaces/sagawa/ReactionT5_task_forward

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

You can use this model for forward reaction prediction or fine-tune it on your own dataset.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("sagawa/ReactionT5v2-forward", return_tensors="pt")
model = AutoModelForSeq2SeqLM.from_pretrained("sagawa/ReactionT5v2-forward")

inp = tokenizer('REACTANT:COC(=O)C1=CCCN(C)C1.O.[Al+3].[H-].[Li+].[Na+].[OH-]REAGENT:C1CCOC1', return_tensors='pt')
output = model.generate(**inp, num_beams=1, num_return_sequences=1, return_dict_in_generate=True, output_scores=True)
output = tokenizer.decode(output['sequences'][0], skip_special_tokens=True).replace(' ', '').rstrip('.')
output # 'CN1CCC=C(CO)C1'
```

## Training Details

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

We used the [Open Reaction Database (ORD) dataset](https://drive.google.com/file/d/1fa2MyLdN1vcA7Rysk8kLQENE92YejS9B/view?usp=drive_link) for model training. In addition, we excluded the [USPTO_MIT dataset](https://yzhang.hpc.nyu.edu/T5Chem/index.html)'s test split from the training data to prevent data leakage. The command used for training is the following. For more information about data preprocessing and training, please refer to the paper and GitHub repository.

```bash
cd task_forward
python train.py \
    --output_dir='t5' \
    --epochs=100 \
    --lr=1e-3 \
    --batch_size=32 \
    --input_max_len=150 \
    --target_max_len=100 \
    --weight_decay=0.01 \
    --evaluation_strategy='epoch' \
    --save_strategy='epoch' \
    --logging_strategy='epoch' \
    --train_data_path='../data/preprocessed_ord_train.csv' \
    --valid_data_path='../data/preprocessed_ord_valid.csv' \
    --test_data_path='../data/preprocessed_ord_test.csv' \
    --USPTO_test_data_path='../data/USPTO_MIT/MIT_separated/test.csv' \
    --disable_tqdm \
    --pretrained_model_name_or_path='sagawa/CompoundT5'
```

### Results

| Model | Training set | Test set | Top-1 [% acc.] | Top-2 [% acc.] | Top-3 [% acc.] | Top-5 [% acc.] |
|----------------------|---------------------------|----------|----------------|----------------|----------------|----------------|
| Sequence-to-sequence | USPTO_MIT | USPTO_MIT | 80.3 | 84.7 | 86.2 | 87.5 |
| WLDN | USPTO_MIT | USPTO_MIT | 80.6 (85.6) | 90.5 | 92.8 | 93.4 |
| Molecular Transformer | USPTO_MIT | USPTO_MIT | 88.8 | 92.6 | – | 94.4 |
| T5Chem | USPTO_MIT | USPTO_MIT | 90.4 | 94.2 | – | 96.4 |
| CompoundT5 | USPTO_MIT | USPTO_MIT | 86.6 | 89.5 | 90.4 | 91.2 |
| [ReactionT5 (This model)](https://huggingface.co/sagawa/ReactionT5v2-forward) | - | USPTO_MIT | 92.8 | 95.6 | 96.4 | 97.1 |
| [ReactionT5](https://huggingface.co/sagawa/ReactionT5v2-forward-USPTO_MIT) | USPTO_MIT | USPTO_MIT | 97.5 | 98.6 | 98.8 | 99.0 |

Performance comparison of CompoundT5, ReactionT5, and other models in product prediction.

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

arxiv link: https://arxiv.org/abs/2311.06708
```
@misc{sagawa2023reactiont5,
  title={ReactionT5: a large-scale pre-trained model towards application of limited reaction data},
  author={Tatsuya Sagawa and Ryosuke Kojima},
  year={2023},
  eprint={2311.06708},
  archivePrefix={arXiv},
  primaryClass={physics.chem-ph}
}
```
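Since the table reports Top-k accuracies, it may help to see how multiple candidate products are produced; a minimal sketch reusing `tokenizer`, `model`, and `inp` from the usage snippet above (the beam-search settings are assumptions):

```python
# Generate five candidate products with beam search, then decode each candidate SMILES.
output = model.generate(
    **inp,
    num_beams=5,
    num_return_sequences=5,
    return_dict_in_generate=True,
    output_scores=True,
)
for seq in output["sequences"]:
    print(tokenizer.decode(seq, skip_special_tokens=True).replace(" ", "").rstrip("."))
```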
legraphista/Reflection-Llama-3.1-70B-IMat-GGUF
legraphista
2024-09-09T05:17:06Z
466
8
gguf
[ "gguf", "quantized", "GGUF", "quantization", "imat", "imatrix", "static", "16bit", "8bit", "6bit", "5bit", "4bit", "3bit", "2bit", "1bit", "text-generation", "base_model:mattshumer/Reflection-Llama-3.1-70B", "base_model:quantized:mattshumer/Reflection-Llama-3.1-70B", "license:llama3.1", "region:us", "conversational" ]
text-generation
2024-09-06T01:30:35Z
---
base_model: mattshumer/Reflection-Llama-3.1-70B
inference: false
library_name: gguf
license: llama3.1
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- quantization
- imat
- imatrix
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---

> [!WARNING]
> Potentially broken weights, see https://x.com/mattshumer_/status/1832424499054309804?s=46
> Once resolved, this message will go away

# Reflection-Llama-3.1-70B-IMat-GGUF
_Llama.cpp imatrix quantization of mattshumer/Reflection-Llama-3.1-70B_

Original Model: [mattshumer/Reflection-Llama-3.1-70B](https://huggingface.co/mattshumer/Reflection-Llama-3.1-70B)
Original dtype: `FP32` (`float32`)
Quantized by: llama.cpp [b3671](https://github.com/ggerganov/llama.cpp/releases/tag/b3671)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)

- [Files](#files)
  - [IMatrix](#imatrix)
  - [Common Quants](#common-quants)
  - [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
  - [Chat template with system prompt](#chat-template-with-system-prompt)
  - [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
  - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
  - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)

---

## Files

### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/imatrix.dat)

### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Reflection-Llama-3.1-70B.Q8_0/*](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/tree/main/Reflection-Llama-3.1-70B.Q8_0) | Q8_0 | 74.98GB | ✅ Available | ⚪ Static | ✂ Yes |
| [Reflection-Llama-3.1-70B.Q6_K/*](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/tree/main/Reflection-Llama-3.1-70B.Q6_K) | Q6_K | 57.89GB | ✅ Available | ⚪ Static | ✂ Yes |
| [Reflection-Llama-3.1-70B.Q4_K.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.Q4_K.gguf) | Q4_K | 42.52GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Reflection-Llama-3.1-70B.Q3_K.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.Q3_K.gguf) | Q3_K | 34.27GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Reflection-Llama-3.1-70B.Q2_K.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.Q2_K.gguf) | Q2_K | 26.38GB | ✅ Available | 🟢 IMatrix | 📦 No |

### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Reflection-Llama-3.1-70B.BF16/*](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/tree/main/Reflection-Llama-3.1-70B.BF16) | BF16 | 141.12GB | ✅ Available | ⚪ Static | ✂ Yes |
| [Reflection-Llama-3.1-70B.FP16/*](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/tree/main/Reflection-Llama-3.1-70B.FP16) | F16 | 141.12GB | ✅ Available | ⚪ Static | ✂ Yes |
| [Reflection-Llama-3.1-70B.Q8_0/*](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/tree/main/Reflection-Llama-3.1-70B.Q8_0) | Q8_0 | 74.98GB | ✅ Available | ⚪ Static | ✂ Yes |
| [Reflection-Llama-3.1-70B.Q6_K/*](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/tree/main/Reflection-Llama-3.1-70B.Q6_K) | Q6_K | 57.89GB | ✅ Available | ⚪ Static | ✂ Yes |
| [Reflection-Llama-3.1-70B.Q5_K/*](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/tree/main/Reflection-Llama-3.1-70B.Q5_K) | Q5_K | 49.95GB | ✅ Available | ⚪ Static | ✂ Yes |
| [Reflection-Llama-3.1-70B.Q5_K_S/*](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/tree/main/Reflection-Llama-3.1-70B.Q5_K_S) | Q5_K_S | 48.66GB | ✅ Available | ⚪ Static | ✂ Yes |
| [Reflection-Llama-3.1-70B.Q4_K.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.Q4_K.gguf) | Q4_K | 42.52GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Reflection-Llama-3.1-70B.Q4_K_S.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.Q4_K_S.gguf) | Q4_K_S | 40.35GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Reflection-Llama-3.1-70B.IQ4_NL.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.IQ4_NL.gguf) | IQ4_NL | 40.05GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Reflection-Llama-3.1-70B.IQ4_XS.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.IQ4_XS.gguf) | IQ4_XS | 37.90GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Reflection-Llama-3.1-70B.Q3_K.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.Q3_K.gguf) | Q3_K | 34.27GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Reflection-Llama-3.1-70B.Q3_K_L.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.Q3_K_L.gguf) | Q3_K_L | 37.14GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Reflection-Llama-3.1-70B.Q3_K_S.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.Q3_K_S.gguf) | Q3_K_S | 30.91GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Reflection-Llama-3.1-70B.IQ3_M.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.IQ3_M.gguf) | IQ3_M | 31.94GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Reflection-Llama-3.1-70B.IQ3_S.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.IQ3_S.gguf) | IQ3_S | 30.91GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Reflection-Llama-3.1-70B.IQ3_XS.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.IQ3_XS.gguf) | IQ3_XS | 29.31GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Reflection-Llama-3.1-70B.IQ3_XXS.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.IQ3_XXS.gguf) | IQ3_XXS | 27.47GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Reflection-Llama-3.1-70B.Q2_K.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.Q2_K.gguf) | Q2_K | 26.38GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Reflection-Llama-3.1-70B.Q2_K_S.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.Q2_K_S.gguf) | Q2_K_S | 24.47GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Reflection-Llama-3.1-70B.IQ2_M.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.IQ2_M.gguf) | IQ2_M | 24.12GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Reflection-Llama-3.1-70B.IQ2_S.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.IQ2_S.gguf) | IQ2_S | 22.24GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Reflection-Llama-3.1-70B.IQ2_XS.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.IQ2_XS.gguf) | IQ2_XS | 21.14GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Reflection-Llama-3.1-70B.IQ2_XXS.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.IQ2_XXS.gguf) | IQ2_XXS | 19.10GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Reflection-Llama-3.1-70B.IQ1_M.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.IQ1_M.gguf) | IQ1_M | 16.75GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Reflection-Llama-3.1-70B.IQ1_S.gguf](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/Reflection-Llama-3.1-70B.IQ1_S.gguf) | IQ1_S | 15.34GB | ✅ Available | 🟢 IMatrix | 📦 No |

## Downloading using huggingface-cli

If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/Reflection-Llama-3.1-70B-IMat-GGUF --include "Reflection-Llama-3.1-70B.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/Reflection-Llama-3.1-70B-IMat-GGUF --include "Reflection-Llama-3.1-70B.Q8_0/*" --local-dir ./
# see FAQ for merging GGUF's
```

---

## Inference

> [!IMPORTANT]
> Make sure to set the system prompt:
> `You are a world-class AI system, capable of complex reasoning and reflection. Reason through the query inside <thinking> tags, and then provide your final response inside <output> tags. If you detect that you made a mistake in your reasoning at any point, correct yourself inside <reflection> tags.`

### Chat template with system prompt
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a world-class AI system, capable of complex reasoning and reflection. Reason through the query inside <thinking> tags, and then provide your final response inside <output> tags. If you detect that you made a mistake in your reasoning at any point, correct yourself inside <reflection> tags.<|eot_id|><|start_header_id|>user<|end_header_id|>

{user_prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{assistant_response}<|eot_id|><|start_header_id|>user<|end_header_id|>

{next_user_prompt}<|eot_id|>
```

### Llama.cpp
```
llama.cpp/main -m Reflection-Llama-3.1-70B.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```

---

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).

### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
   - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
   - Download the appropriate zip for your system from the latest release
   - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `Reflection-Llama-3.1-70B.Q8_0`)
3. Run `gguf-split --merge Reflection-Llama-3.1-70B.Q8_0/Reflection-Llama-3.1-70B.Q8_0-00001-of-XXXXX.gguf Reflection-Llama-3.1-70B.Q8_0.gguf`
   - Make sure to point `gguf-split` to the first chunk of the split.

---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!
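The CLI download commands above also have a Python equivalent; a minimal sketch with `huggingface_hub` for fetching all chunks of a split quant (the pattern mirrors the Q8_0 example above):

```python
from huggingface_hub import snapshot_download  # pip install huggingface_hub

# Fetch every chunk of the split Q8_0 quant; see the FAQ above for merging them.
snapshot_download(
    repo_id="legraphista/Reflection-Llama-3.1-70B-IMat-GGUF",
    allow_patterns=["Reflection-Llama-3.1-70B.Q8_0/*"],
    local_dir="./",
)
```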
benayas/bert-base-uncased-tweet_eval-v0
benayas
2024-09-09T05:11:04Z
106
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-09T05:10:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Gaojunyao/StyleShot
Gaojunyao
2024-09-09T04:56:13Z
3
7
diffusers
[ "diffusers", "text-to-image", "style-transfer", "stable-diffusion", "en", "arxiv:2407.01414", "license:mit", "region:us" ]
text-to-image
2024-05-08T15:18:13Z
---
tags:
- text-to-image
- style-transfer
- stable-diffusion
language:
- en
library_name: diffusers
license: mit
---

# StyleShot Model Card

<div align="center">

[**Project Page**](https://styleshot.github.io) **|** [**Paper (ArXiv)**](https://arxiv.org/abs/2407.01414) **|** [**Code**](https://github.com/open-mmlab/StyleShot)

</div>

---

## Introduction

We present StyleShot, a generalized plug-and-play style transfer method capable of generating high-quality stylized images that match the desired style from any reference image, without test-time style-tuning. To the best of our knowledge, StyleShot is the first work to designate a style-aware encoder based on Stable Diffusion and a content-fusion encoder for better style and content integration, achieving state-of-the-art text- and image-driven style transfer performance compared to existing methods.

![arch](./framework.png)

## Models

### StyleShot for SD 1.5
- [ip.bin](https://huggingface.co/Gaojunyao/StyleShot/blob/main/pretrained_weight/ip.bin): weights for the ip-adapter and our content-retention encoder.
- [style_aware_encoder.bin](https://huggingface.co/Gaojunyao/StyleShot/blob/main/pretrained_weight/style_aware_encoder.bin): weights for the style-aware encoder.

## Disclaimer

We developed this repository for RESEARCH purposes, so it may only be used for personal/research/non-commercial purposes.
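The weight files listed above can also be fetched programmatically; a short sketch using the standard `huggingface_hub` API (file paths are taken from the links in the Models section):

```python
from huggingface_hub import hf_hub_download

# File paths come from the Models section above.
ip_path = hf_hub_download("Gaojunyao/StyleShot", "pretrained_weight/ip.bin")
encoder_path = hf_hub_download("Gaojunyao/StyleShot", "pretrained_weight/style_aware_encoder.bin")
print(ip_path, encoder_path)
```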
rendchevi/deberta-v3-base_goemospec_v0.1
rendchevi
2024-09-09T04:54:41Z
163
0
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-09T04:54:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jvelja/BERT_vllm-gemma2b-deterministic_5
jvelja
2024-09-09T04:53:22Z
105
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-09T04:53:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jvelja/vllm-gemma2b_5
jvelja
2024-09-09T04:41:03Z
59
0
transformers
[ "transformers", "pytorch", "safetensors", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "endpoints_compatible", "region:us" ]
reinforcement-learning
2024-09-09T04:41:00Z
--- license: apache-2.0 library_name: transformers tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="jvelja/vllm-gemma2b_5") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("jvelja/vllm-gemma2b_5") model = AutoModelForCausalLMWithValueHead.from_pretrained("jvelja/vllm-gemma2b_5") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
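For completeness, the PPO loop that produces such a model can be sketched as below. This is an illustration, not the actual training script for this checkpoint: it assumes the pre-0.12 `trl` API (a `PPOTrainer` with a `step()` method) and substitutes a hard-coded placeholder reward for a real reward signal.

```python
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

model = AutoModelForCausalLMWithValueHead.from_pretrained("jvelja/vllm-gemma2b_5")
tokenizer = AutoTokenizer.from_pretrained("jvelja/vllm-gemma2b_5")

# Minimal config; a real run also tunes the KL penalty, learning rate, etc.
ppo_trainer = PPOTrainer(PPOConfig(batch_size=1, mini_batch_size=1), model, ref_model=None, tokenizer=tokenizer)

query = tokenizer.encode("Hello, my llama is cute", return_tensors="pt")[0]
response = ppo_trainer.generate(query, return_prompt=False, max_new_tokens=20)[0]

# Placeholder scalar reward; a real setup scores the response with a reward
# model, a value function, or human feedback.
rewards = [torch.tensor(1.0)]
stats = ppo_trainer.step([query], [response], rewards)
```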
jvelja/BERT_vllm-gemma2b_5
jvelja
2024-09-09T04:41:00Z
106
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-09T04:40:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
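Because the card above leaves its getting-started section empty, here is a minimal sketch based only on the repo's tags (`distilbert`, `text-classification`); the label set and intended inputs are undocumented, so treat the output labels as opaque until verified.

```python
from transformers import pipeline

# Text-classification pipeline inferred from the repo tags; labels are undocumented.
classifier = pipeline("text-classification", model="jvelja/BERT_vllm-gemma2b_5")
print(classifier("Hello, my llama is cute"))
```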
Harshkmr/smollm-135-multilingual
Harshkmr
2024-09-09T04:39:37Z
20
0
null
[ "tensorboard", "safetensors", "llama", "generated_from_trainer", "base_model:Harshkmr/smollm-135-multilingual", "base_model:finetune:Harshkmr/smollm-135-multilingual", "license:apache-2.0", "region:us" ]
null
2024-09-08T10:18:10Z
--- license: apache-2.0 base_model: Harshkmr/smollm-135-multilingual tags: - generated_from_trainer model-index: - name: smollm-135-multilingual results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smollm-135-multilingual This model is a fine-tuned version of [Harshkmr/smollm-135-multilingual](https://huggingface.co/Harshkmr/smollm-135-multilingual) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.44.0 - Pytorch 2.4.0 - Datasets 2.21.0 - Tokenizers 0.19.1
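The hyperparameter list above maps directly onto 🤗 `TrainingArguments`. The sketch below is a hypothetical reconstruction for illustration, not the original training script; the output directory name is an assumption.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed in the card.
args = TrainingArguments(
    output_dir="smollm-135-multilingual",  # assumed name
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed precision
)
```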
claudiubarbu/dpo
claudiubarbu
2024-09-09T04:38:35Z
15
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "trl", "dpo", "generated_from_trainer", "dataset:piqa", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-08-30T09:46:26Z
--- library_name: transformers license: mit base_model: openai-community/gpt2 tags: - trl - dpo - generated_from_trainer datasets: - piqa model-index: - name: HW2-dpo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HW2-dpo This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on the piqa dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.44.2 - Pytorch 2.0.0 - Datasets 2.16.1 - Tokenizers 0.19.1
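As a rough illustration of how such a run is wired up, the sketch below uses `trl`'s `DPOTrainer` with the hyperparameters listed above. It is not the original training script: the toy preference pairs stand in for however piqa was converted to (prompt, chosen, rejected) triples, which the card does not document, and the exact `DPOTrainer` signature varies across `trl` versions.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 defines no pad token

# Toy stand-in for the preference data, in the format DPO expects.
train_dataset = Dataset.from_dict({
    "prompt": ["To boil an egg,"],
    "chosen": [" place it in boiling water for about ten minutes."],
    "rejected": [" put it in the freezer overnight."],
})

args = DPOConfig(
    output_dir="HW2-dpo",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    seed=42,
    lr_scheduler_type="linear",
)
trainer = DPOTrainer(model, args=args, train_dataset=train_dataset, tokenizer=tokenizer)
trainer.train()
```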
ThomasComics/MN-12B-Chronos-Gold-Celeste-v1
ThomasComics
2024-09-09T04:33:58Z
5
3
null
[ "safetensors", "mistral", "merge", "mergekit", "lazymergekit", "nothingiisreal/MN-12B-Celeste-V1.9", "elinas/Chronos-Gold-12B-1.0", "license:apache-2.0", "region:us" ]
null
2024-09-09T04:31:51Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - nothingiisreal/MN-12B-Celeste-V1.9 - elinas/Chronos-Gold-12B-1.0 --- # MN-12B-Chronos-Gold-Celeste-v1 MN-12B-Chronos-Gold-Celeste-v1 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [nothingiisreal/MN-12B-Celeste-V1.9](https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9) * [elinas/Chronos-Gold-12B-1.0](https://huggingface.co/elinas/Chronos-Gold-12B-1.0) ## 🧩 Configuration ```yaml base_model: elinas/Chronos-Gold-12B-1.0 parameters: int8_mask: true rescale: true normalize: false merge_method: della dtype: bfloat16 models: - model: nothingiisreal/MN-12B-Celeste-V1.9 parameters: density: [0.4, 0.5, 0.6, 0.4, 0.6, 0.5, 0.4] epsilon: [0.15, 0.15, 0.25, 0.15, 0.15] lambda: 0.85 weight: [0.6, 0.5, 0.4, 0.6, 0.4, 0.5, 0.6] - model: elinas/Chronos-Gold-12B-1.0 parameters: density: [0.45, 0.55, 0.45, 0.55, 0.45] epsilon: [0.1, 0.1, 0.25, 0.1, 0.1] lambda: 0.85 weight: [0.55, 0.45, 0.55, 0.45, 0.55] ```
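The merged weights load with plain `transformers`; the sketch below matches the merge's `bfloat16` dtype, while `device_map="auto"` is just a convenience assumption for a 12B model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "ThomasComics/MN-12B-Chronos-Gold-Celeste-v1",
    torch_dtype=torch.bfloat16,  # matches the merge config's dtype
    device_map="auto",           # convenience assumption for a 12B model
)
tokenizer = AutoTokenizer.from_pretrained("ThomasComics/MN-12B-Chronos-Gold-Celeste-v1")
```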
QuantFactory/MN-12B-Lyra-v4-GGUF
QuantFactory
2024-09-09T04:18:30Z
209
6
null
[ "gguf", "en", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-09T03:05:08Z
--- license: cc-by-nc-4.0 language: - en --- ![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ) # QuantFactory/MN-12B-Lyra-v4-GGUF This is a quantized version of [Sao10K/MN-12B-Lyra-v4](https://huggingface.co/Sao10K/MN-12B-Lyra-v4) created using llama.cpp # Original Model Card ![Lyra](https://huggingface.co/Sao10K/MN-12B-Lyra-v4/resolve/main/lyra.png) Mistral-NeMo-12B-Lyra-v4, a variation of [Lyra-v4a1](https://huggingface.co/Sao10K/MN-12B-Lyra-v4a1), layered over [Lyra-v3](https://huggingface.co/Sao10K/MN-12B-Lyra-v3), which was built on top of [Lyra-v2a2](https://huggingface.co/Sao10K/MN-12B-Lyra-v2a2), which itself was built upon [Lyra-v2a1](https://huggingface.co/Sao10K/MN-12B-Lyra-v2a1). # Model Versioning ``` [See Previous Models] | Lyra-v4a1 | ------------> Lyra-v4 [Separate RL Step targeting Instruct and Coherency over Base Nemo instead of SFT First, Result is Merged with Lyra-v4a1, fixes most quant-based issues. Somehow.] ``` # This uses ChatML, or any of its variants which were included in previous versions. ``` <|im_start|>system This is the system prompt.<|im_end|> <|im_start|>user Instructions placed here.<|im_end|> <|im_start|>assistant The model's response will be here.<|im_end|> -------------------------------------------------- [INST]system This is another system prompt.[/INST] [INST]user Your instructions placed here.[/INST] [INST]assistant The model's response will be here.[/INST] ``` # Recommended Samplers: ``` Temperature: 0.6 - 1 # Make sure min_p is set before Temperature in Sampler Orders min_p: 0.1 - 0.2 # Crucial for NeMo ``` # Recommended Stopping Strings: ``` <|im_end|> </s> [/INST] ``` # Notes \- I think I fixed the extra token issue some users seem to be facing, while retaining everything else? It's some error alright. <br>\- If you're using XML tags, you may see weird malformed stopping strings. Just add them to your current list and move on. <br>\- It's pretty nice, imo. I've been messing around with it a lot. <br>\- Make sure the ChatML template is correct; I think there are some issues with the one used in SillyTavern which might cause improper replies.
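For reference, here is a minimal way to run the GGUF files with the recommended samplers and stopping strings, sketched with `llama-cpp-python`; the quant filename pattern is a hypothetical pick, so substitute whichever file in the repo fits your hardware.

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/MN-12B-Lyra-v4-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical choice of quant
)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "This is the system prompt."},
        {"role": "user", "content": "Instructions placed here."},
    ],
    temperature=0.8,  # within the recommended 0.6 - 1 range
    min_p=0.1,        # the card calls min_p crucial for NeMo
    stop=["<|im_end|>", "</s>", "[/INST]"],
)
print(out["choices"][0]["message"]["content"])
```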