modelId
stringlengths
5
139
author
stringlengths
2
42
last_modified
timestamp[us, tz=UTC]date
2020-02-15 11:33:14
2025-06-28 00:40:13
downloads
int64
0
223M
likes
int64
0
11.7k
library_name
stringclasses
500 values
tags
sequencelengths
1
4.05k
pipeline_tag
stringclasses
54 values
createdAt
timestamp[us, tz=UTC]date
2022-03-02 23:29:04
2025-06-28 00:36:54
card
stringlengths
11
1.01M
DoctorCrime/Ksusha
DoctorCrime
2024-05-22T11:00:46Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-22T11:00:46Z
--- license: apache-2.0 ---
anzeo/lora_fine_tuned_cb_sloberta
anzeo
2024-05-22T11:00:09Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:EMBEDDIA/sloberta", "base_model:adapter:EMBEDDIA/sloberta", "license:cc-by-sa-4.0", "region:us" ]
null
2024-05-22T10:34:17Z
--- license: cc-by-sa-4.0 library_name: peft tags: - generated_from_trainer base_model: EMBEDDIA/sloberta metrics: - accuracy - f1 model-index: - name: lora_fine_tuned_cb_sloberta results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lora_fine_tuned_cb_sloberta This model is a fine-tuned version of [EMBEDDIA/sloberta](https://huggingface.co/EMBEDDIA/sloberta) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4606 - Accuracy: 0.3182 - F1: 0.1536 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:| | 0.932 | 3.5714 | 50 | 1.2096 | 0.3182 | 0.1536 | | 0.745 | 7.1429 | 100 | 1.3888 | 0.3182 | 0.1536 | | 0.743 | 10.7143 | 150 | 1.4293 | 0.3182 | 0.1536 | | 0.6881 | 14.2857 | 200 | 1.4559 | 0.3182 | 0.1536 | | 0.7204 | 17.8571 | 250 | 1.4635 | 0.3182 | 0.1536 | | 0.7244 | 21.4286 | 300 | 1.4588 | 0.3182 | 0.1536 | | 0.6949 | 25.0 | 350 | 1.4588 | 0.3182 | 0.1536 | | 0.7195 | 28.5714 | 400 | 1.4606 | 0.3182 | 0.1536 | ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.1 - Pytorch 2.1.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
ilhemhmz752/Llama-2-AgroBot
ilhemhmz752
2024-05-22T10:53:40Z
0
0
peft
[ "peft", "region:us" ]
null
2024-05-22T10:53:21Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
Heimat24/vhs_burghausen_e5-6-2
Heimat24
2024-05-22T10:47:43Z
6
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-22T10:45:47Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 30 with parameters: ``` {'batch_size': 10, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 50, "evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 6, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
RichardErkhov/maywell_-_Llama-3-Synatra-11B-v1-4bits
RichardErkhov
2024-05-22T10:47:01Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-22T10:18:14Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-3-Synatra-11B-v1 - bnb 4bits - Model creator: https://huggingface.co/maywell/ - Original model: https://huggingface.co/maywell/Llama-3-Synatra-11B-v1/ Original model description: --- license: other license_name: llama3 base_model: - kuotient/Llama-3-11B-Instruct-attenuated --- # Synatra-11B-L3-v1 ## Model Description Llama 3 11B attenuated 모델에 40만개 이상의 한국어, 영어 채팅 데이터를 학습시킨 모델입니다. More Details Soon. 채팅 템플릿은 라마3 Chat 형식을 따릅니다. ## License https://llama.meta.com/llama3/license/ ## Thanks to - 기반 모델을 제공해주신, [Jisoo Kim (kuotient)](https://huggingface.co/kuotient) - A100 클러스터를 제공해주신, [Sionic AI](https://sionic.ai/) ## Contact - [Discord Server Link](https://discord.gg/MrBt3PXdXc)
Thodns/openai-whisper-large-colab
Thodns
2024-05-22T10:44:08Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-22T10:43:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Heimat24/vhs_burghausen_e5-6-10
Heimat24
2024-05-22T10:44:02Z
6
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-22T10:42:30Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 30 with parameters: ``` {'batch_size': 10, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 50, "evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 30, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
theglassofwater/preprocessed_finetuned
theglassofwater
2024-05-22T10:42:58Z
196
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T10:42:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/beberik_-_Nyxene-v2-11B-4bits
RichardErkhov
2024-05-22T10:41:29Z
76
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-22T10:30:50Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Nyxene-v2-11B - bnb 4bits - Model creator: https://huggingface.co/beberik/ - Original model: https://huggingface.co/beberik/Nyxene-v2-11B/ Original model description: --- license: cc-by-nc-4.0 tags: - merge model-index: - name: Nyxene-v2-11B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 67.41 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v2-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 84.54 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v2-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.26 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v2-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 55.62 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v2-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.56 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v2-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 54.66 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v2-11B name: Open LLM Leaderboard --- ## Description This repo contains bf16 files of Nyxene-v2-11B. It feels like with the new models, 1% is no longer needed as in the [previous version](https://huggingface.co/beberik/Nyxene-v1-11B). And yes, new version. Again. ## Model used - [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) - [openaccess-ai-collective/DPOpenHermes-7B](https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B) - [fblgit/fblgit/una-cybertron-7b-v2](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16) - [chargoddard/loyal-piano-m7-cdpo](https://huggingface.co/chargoddard/loyal-piano-m7-cdpo) ## Prompt template The best one after further testing is this one: ``` <|system|> Below is an instruction that describes a task. Write a response that appropriately completes the request. <|user|> {prompt} <|assistant|> ``` ## The secret sauce loyal-piano-cybertron-11B : ``` slices: - sources: - model: fblgit/una-cybertron-7b-v2 layer_range: [0, 24] - sources: - model: chargoddard/loyal-piano-m7-cdpo layer_range: [8, 32] merge_method: passthrough dtype: bfloat16 ``` Starling-DPOHermes-11B : ``` slices: - sources: - model: berkeley-nest/Starling-LM-7B-alpha layer_range: [0, 24] - sources: - model: openaccess-ai-collective/DPOpenHermes-7B layer_range: [8, 32] merge_method: passthrough dtype: bfloat16 ``` Nyxene-11B : ``` slices: - sources: - model: loyal-piano-cybertron-11B layer_range: [0, 48] - model: Starling-NeuralHermes-11B layer_range: [0, 48] merge_method: slerp base_model: loyal-piano-cybertron-11B parameters: t: - filter: lm_head value: [0.75] - filter: embed_tokens value: [0.75] - filter: self_attn value: [0.75, 0.25] - filter: mlp value: [0.25, 0.75] - filter: layernorm value: [0.5, 0.5] - filter: modelnorm value: [0.75] - value: 0.5 # fallback for rest of tensors dtype: bfloat16 ``` I use [mergekit](https://github.com/cg123/mergekit) for all the manipulation told here. Thanks to the [Undi95](https://huggingface.co/Undi95) for the original [11B mistral merge](https://huggingface.co/Undi95/Mistral-11B-OmniMix) recipe. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beberik__Nyxene-v2-11B) | Metric |Value| |---------------------------------|----:| |Avg. |67.84| |AI2 Reasoning Challenge (25-Shot)|67.41| |HellaSwag (10-Shot) |84.54| |MMLU (5-Shot) |65.26| |TruthfulQA (0-shot) |55.62| |Winogrande (5-shot) |79.56| |GSM8k (5-shot) |54.66|
sj21867/ai_art_exp2_vit_realism
sj21867
2024-05-22T10:36:55Z
216
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-22T10:35:41Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: ai_art_exp2_vit_realism results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ai_art_exp2_vit_realism This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Accuracy: {'accuracy': 0.8966666666666666} - Loss: 0.8484 - Overall Accuracy: 0.8967 - Human Accuracy: 0.72 - Ld Accuracy: 0.99 - Sd Accuracy: 0.98 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | Overall Accuracy | Human Accuracy | Ld Accuracy | Sd Accuracy | |:-------------:|:-----:|:----:|:--------------------------------:|:---------------:|:----------------:|:--------------:|:-----------:|:-----------:| | 1.0272 | 0.96 | 18 | {'accuracy': 0.9033333333333333} | 0.8552 | 0.9033 | 0.7835 | 0.9640 | 0.9565 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
dayyass/universal-sentence-encoder-multilingual-large-3-pytorch
dayyass
2024-05-22T10:36:47Z
0
15
sentence-transformers
[ "sentence-transformers", "onnx", "mteb", "Sentence Transformers", "sentence-similarity", "arxiv:1803.11175", "arxiv:1907.04307", "feature-extraction", "en", "ru", "ar", "zh", "fr", "de", "it", "ja", "ko", "nl", "pl", "pt", "es", "th", "tr", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-21T20:45:03Z
--- license: mit language: - en - ru - ar - zh - fr - de - it - ja - ko - nl - pl - pt - es - th - tr library_name: sentence-transformers pipeline_tag: feature-extraction tags: - mteb - Sentence Transformers - sentence-similarity - arxiv:1803.11175 - arxiv:1907.04307 --- # Convert MUSE from TensorFlow to PyTorch This repository contains code to use mUSE (Multilingual Universal Sentence Encoder) transformer model from [TF Hub](https://www.kaggle.com/models/google/universal-sentence-encoder/tensorFlow2/multilingual-large) using **PyTorch**. > [!IMPORTANT] > **The PyTorch model can be used not only for inference, but also for additional training and fine-tuning!** Read more about the project: [GitHub](https://github.com/dayyass/muse_tf2pt/tree/main). # Usage The model is available in [HF Models](https://huggingface.co/dayyass/universal-sentence-encoder-multilingual-large-3-pytorch/tree/main) directly through `torch` (*currently, without native support from the `transformers` library*). Model initialization and usage code: ```python import torch from functools import partial from architecture import MUSE from tokenizer import get_tokenizer, tokenize PATH_TO_PT_MODEL = "model.pt" PATH_TO_TF_MODEL = "universal-sentence-encoder-multilingual-large-3" tokenizer = get_tokenizer(PATH_TO_TF_MODEL) tokenize = partial(tokenize, tokenizer=tokenizer) model_torch = MUSE( num_embeddings=128010, embedding_dim=512, d_model=512, num_heads=8, ) model_torch.load_state_dict( torch.load(PATH_TO_PT_MODEL) ) sentence = "Hello, world!" res = model_torch(tokenize(sentence)) ``` > [!NOTE] > Currently, the checkpoint of the original TF Hub model is used for tokenization, so it is loaded in the code above.
sj21867/ai_art_exp2_vit_impressionism
sj21867
2024-05-22T10:35:39Z
254
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-22T10:34:24Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: ai_art_exp2_vit_impressionism results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ai_art_exp2_vit_impressionism This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Accuracy: {'accuracy': 0.9366666666666666} - Loss: 0.6359 - Overall Accuracy: 0.9367 - Human Accuracy: 0.82 - Ld Accuracy: 0.99 - Sd Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | Overall Accuracy | Human Accuracy | Ld Accuracy | Sd Accuracy | |:-------------:|:-----:|:----:|:--------------------------------:|:---------------:|:----------------:|:--------------:|:-----------:|:-----------:| | 0.9363 | 0.96 | 18 | {'accuracy': 0.9266666666666666} | 0.6463 | 0.9267 | 0.7857 | 0.9901 | 1.0 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
icyhandkerchief/nana7miVR_so-vits-svc-4.1
icyhandkerchief
2024-05-22T10:35:13Z
0
2
null
[ "region:us" ]
null
2024-05-10T20:25:39Z
声明:本作品仅作为娱乐目的发布,可能造成的后果与使用的音声转换项目的作者、贡献者无关。 如有疑惑,或是有侵权可能,请咨询或联系作者 https://space.bilibili.com/3461578995272151 七海Nana7mi、七海与阿梓的混合模型以及参数,适用于so-vits-svc项目推理。 声音来源:https://space.bilibili.com/434334701 七海 https://space.bilibili.com/7706705 阿梓 人声数据集来源:https://space.bilibili.com/5859321 七海模型来源:自制 阿梓模型来源:https://huggingface.co/RefirmSaky/soVITS-4.1_Azi-Singer_Model 项目:https://github.com/svc-develop-team/so-vits-svc 使用的镜像:https://www.codewithgpu.com/i/svc-develop-team/so-vits-svc/so-vits-svc-4.1-Stable/376/14.2 模型融合、训练、推理等环节均使用该镜像下的webUI完成。主模型使用压缩工具压缩,仅能用于推理。可视情况加载扩散模型。 示例音频来源:https://www.bilibili.com/video/BV1pG4y1u7Zr/ 参数文件同时适用于两种模型。
evioalpo/rl_course_defend_the_center
evioalpo
2024-05-22T10:35:01Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-22T10:34:54Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_defend_the_center type: doom_defend_the_center metrics: - type: mean_reward value: 20.80 +/- 1.99 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_defend_the_center** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r evioalpo/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_defend_the_center --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_defend_the_center --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
sj21867/ai_art_exp2_vit_baroque
sj21867
2024-05-22T10:34:22Z
216
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-22T10:33:11Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: ai_art_exp2_vit_baroque results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ai_art_exp2_vit_baroque This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Accuracy: {'accuracy': 0.8833333333333333} - Loss: 0.7276 - Overall Accuracy: 0.8833 - Human Accuracy: 0.72 - Ld Accuracy: 0.97 - Sd Accuracy: 0.96 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | Overall Accuracy | Human Accuracy | Ld Accuracy | Sd Accuracy | |:-------------:|:-----:|:----:|:--------------------------------:|:---------------:|:----------------:|:--------------:|:-----------:|:-----------:| | 0.9747 | 0.96 | 18 | {'accuracy': 0.8666666666666667} | 0.7253 | 0.8667 | 0.6364 | 0.9813 | 0.9429 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
tedad09/ProvaLoRA-15Epochsc
tedad09
2024-05-22T10:32:56Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-22T10:32:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AI4BPM/gestion_financiere_model_512_3
AI4BPM
2024-05-22T10:32:55Z
106
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-22T10:30:51Z
--- license: apache-2.0 ---
wizard-who-wonders/model
wizard-who-wonders
2024-05-22T10:31:30Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-22T10:31:14Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** wizard-who-wonders - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
cloneofsimo/lavenderflow-5.6B
cloneofsimo
2024-05-22T10:24:02Z
0
54
null
[ "arxiv:2202.10261", "license:mit", "region:us" ]
null
2024-05-20T15:05:53Z
--- license: mit --- # LavenderFlow 5.6B v0 <p align="center"> <img src="https://pbs.twimg.com/media/GOKPG_Ra0AELCOg?format=jpg&name=4096x4096" alt="large" width="600"> </p> ✅Latent, MMDiT, muP, CFM, FSDP, recaped, 768x768, T5 ✅No strings attached, completely-open-every-step-of-the-way ✅Not SoTA😅(but.. not bad considering it was trained by one grad-student under total 3 weeks of development.) Severely undertrained! # How to use Once the model is trained with more compute, I will probably put more efforts to make this more accesible, but for now, have a look at [this notebook](https://github.com/cloneofsimo/minRF/blob/main/advanced/inference_t2i_example.ipynb) You would need to download the latest checkpoint via ```bash wget https://huggingface.co/cloneofsimo/lavenderflow-5.6B/resolve/highres-model_49153/ema1.pt ``` and load the checkpoint `ema1.pt` there. # No paper, no blog, not raising cash, who am I and why did I do this? I just do things for fun, no reasons but... Foundation models seems to only belong in companies. I tell you, while it is tempting to say: "You can't do this, you need team of engineers and researchers to make billion-scale foundation models" it is not the case, and one bored grad student can obviously very easily pull this off. Now you know, you know. # Warning: This is extremely undertrained model This is very early preview of my attempt to recreate SD3, trained on single-node, with batch size of 128, for only ~550k steps. Don't expect SoTA SD3, midjourney, dalle3 quality images with this model. I am only one person, with one 8xH100 node. *They have clusters of gpus*. # What did you do and how long did it take? 1. My first steps were to implement everything in torch and see if it works in MNIST, CIFAR-10. This took about 8 hours, including reading papers, implementing DiT, etc, everything. 2. Next was to scale up. I used muTransfer (basically i setup muP) to find optimal learning rate, and to do this I scaled upto 1B model with [imagenet.int8 dataset](https://github.com/cloneofsimo/imagenet.int8). Seeing the basin alignment, I scaled upto 5.6B parameters. 3. Then I moved to T2I training. I collected [Capfusion Dataset](https://huggingface.co/datasets/BAAI/CapsFusion-120M), which is subset of LAION-coco 4. I deduplicated with [SSCD embeddings](https://arxiv.org/abs/2202.10261) and [FAISS](https://faiss.ai/), using clustering method described in SD3. 5. I cached T5 (new [T5-large](https://huggingface.co/EleutherAI/pile-t5-large) from aieluther) and SDXL-VAE embeddings for all cropped 256x256 images. 6. For efficient sharding and to make better use of NFS, I put everything in [MosaicStreamingDataset](https://github.com/mosaicml/streaming) format, which was easy to use. 7. I rewrote my training code in DeepSpeed, utilizing Zero algorithm of stage-2, which does not shard the weights due to its small size. I got MFU of about 45 ~ 50 ~ 60% depending on the stage. 8. GPU GO BRR. there were some misconfigurations on the way, so I had to restart 3 times. This took about 550k total steps (3 days total) 9. Finally, I went through the same process all over again for [ye-pop dataset](https://huggingface.co/datasets/Ejafa/ye-pop), which is cleaned and recaptioned version of laion-pop dataset. This time with 768x768 resolution. # Acknowlegement Thank you to all authors of HDiT for bunch of pro tips and encouragements, specially @birchlabs. Thanks to 서승현, who is my coworker at @naver who also shares pro tips, and studies deepspeed with me. Thanks to @imbue.ai who provided most of the compute for this research project! Thanks to them, this is could be done at completely free cost! Thanks to people at @fal.ai who provided compute and tips, specially @isidentital and @burkaygur !! # Whats the future? @fal.ai has decided to provide more compute for this project in the future, so we decided to collaborate to make something much bigger, much better. You will see the upcoming version of this project with different names. In the near future, I also hope to work on other modalities, but I have high chance of doing that at Naver without much being open sourced.
ayushi12/finetuned-Blastocyst
ayushi12
2024-05-22T10:20:41Z
194
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-22T05:42:09Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: finetuned-Blastocyst results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-Blastocyst This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4672 - Accuracy: 0.8571 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.8256 | 0.7874 | 100 | 0.6851 | 0.7143 | | 1.0109 | 1.5748 | 200 | 0.6473 | 0.8571 | | 0.697 | 2.3622 | 300 | 0.5080 | 0.8571 | | 0.6829 | 3.1496 | 400 | 0.4237 | 0.8571 | | 0.6407 | 3.9370 | 500 | 0.4672 | 0.8571 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
Thodns/openai-whisper-small-colab
Thodns
2024-05-22T10:15:15Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-22T10:15:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Keerthanah2002/sytr
Keerthanah2002
2024-05-22T10:13:01Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-05-22T10:09:14Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### sytr Dreambooth model trained by Keerthanah2002 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/Keerthanah2002/sytr/resolve/main/sample_images/stray(20).jpg) ![1](https://huggingface.co/Keerthanah2002/sytr/resolve/main/sample_images/stray(10).jpg) ![2](https://huggingface.co/Keerthanah2002/sytr/resolve/main/sample_images/stray(5).jpg) ![3](https://huggingface.co/Keerthanah2002/sytr/resolve/main/sample_images/stray(3).jpg) ![4](https://huggingface.co/Keerthanah2002/sytr/resolve/main/sample_images/stray(8).jpg) ![5](https://huggingface.co/Keerthanah2002/sytr/resolve/main/sample_images/stray(11).jpg) ![6](https://huggingface.co/Keerthanah2002/sytr/resolve/main/sample_images/stray(13).jpg) ![7](https://huggingface.co/Keerthanah2002/sytr/resolve/main/sample_images/stray(9).jpg) ![8](https://huggingface.co/Keerthanah2002/sytr/resolve/main/sample_images/stray(4).jpg) ![9](https://huggingface.co/Keerthanah2002/sytr/resolve/main/sample_images/stray(22).jpg) ![10](https://huggingface.co/Keerthanah2002/sytr/resolve/main/sample_images/stray(15).jpg) ![11](https://huggingface.co/Keerthanah2002/sytr/resolve/main/sample_images/stray(7).jpg) ![12](https://huggingface.co/Keerthanah2002/sytr/resolve/main/sample_images/stray(2).jpg) ![13](https://huggingface.co/Keerthanah2002/sytr/resolve/main/sample_images/stray(1).jpg) ![14](https://huggingface.co/Keerthanah2002/sytr/resolve/main/sample_images/stray(14).jpg) ![15](https://huggingface.co/Keerthanah2002/sytr/resolve/main/sample_images/stray(18).jpg) ![16](https://huggingface.co/Keerthanah2002/sytr/resolve/main/sample_images/stray(16).jpg) ![17](https://huggingface.co/Keerthanah2002/sytr/resolve/main/sample_images/stray(6).jpg) ![18](https://huggingface.co/Keerthanah2002/sytr/resolve/main/sample_images/stray(17).jpg) ![19](https://huggingface.co/Keerthanah2002/sytr/resolve/main/sample_images/stray(21).jpg) ![20](https://huggingface.co/Keerthanah2002/sytr/resolve/main/sample_images/stray(19).jpg)
sj21867/ai_art_exp1_vit
sj21867
2024-05-22T10:06:16Z
196
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-22T09:50:32Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: ai_art_exp1_vit results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ai_art_exp1_vit This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Accuracy: {'accuracy': 0.9826666666666667} - Loss: 0.1296 - Overall Accuracy: 0.9827 - Human Accuracy: 0.966 - Ld Accuracy: 0.994 - Sd Accuracy: 0.988 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | Overall Accuracy | Human Accuracy | Ld Accuracy | Sd Accuracy | |:-------------:|:-----:|:----:|:--------------------------------:|:---------------:|:----------------:|:--------------:|:-----------:|:-----------:| | 0.1366 | 0.992 | 93 | {'accuracy': 0.9873333333333333} | 0.1221 | 0.9873 | 0.9816 | 0.9962 | 0.9836 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
Sercan/distil-whisper-large-v3-tr
Sercan
2024-05-22T10:02:22Z
88
1
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "speech-recognition", "Turkish", "ASR", "tr", "dataset:common_voice", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-21T06:50:40Z
--- language: - "tr" thumbnail: "url_to_thumbnail" tags: - speech-recognition - Turkish - ASR license: "apache-2.0" datasets: - common_voice metrics: - wer - cer base_model: "openai/whisper-large-v3" --- # distil-whisper-large-v3-tr ## Model Description `distil-whisper-large-v3-tr` is a distilled version of the Whisper model, fine-tuned for Turkish language tasks. This model has been trained and evaluated using a comprehensive dataset to achieve high accuracy in Turkish speech recognition. ## Training and Evaluation Metrics The model was trained and evaluated using the `wandb` tool, with the following results: ### Evaluation Metrics - **Cross-Entropy Loss (eval/ce_loss):** 0.53218 - **Epoch (eval/epoch):** 28 - **KL Loss (eval/kl_loss):** 0.34883 - **Total Loss (eval/loss):** 0.77457 - **Evaluation Time (eval/time):** 397.1784 seconds - **Word Error Rate (eval/wer):** 14.43288% - **Orthographic Word Error Rate (eval/wer_ortho):** 21.55298% ### Training Metrics - **Cross-Entropy Loss (train/ce_loss):** 0.04695 - **Epoch (train/epoch):** 28 - **KL Loss (train/kl_loss):** 0.24143 - **Learning Rate (train/learning_rate):** 0.0001 - **Total Loss (train/loss):** 0.27899 - **Training Time (train/time):** 12426.92106 seconds ## Run History ### Overall Metrics - **Real-Time Factor (all/rtf):** 392.23396 - **Word Error Rate (all/wer):** 14.33829 ### Common Voice 17.0 Turkish Pseudo-Labelled Dataset - **Real-Time Factor (common_voice_17_0_tr_pseudo_labelled/test/rtf):** 392.23396 - **Word Error Rate (common_voice_17_0_tr_pseudo_labelled/test/wer):** 14.33829 ## Author **Sercan Çepni** Email: [email protected] --- For any questions or further information, please feel free to contact the author.
BogdanTurbal/synt_data_v_3
BogdanTurbal
2024-05-22T10:01:35Z
2
0
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers-training", "lora", "base_model:stabilityai/sdxl-turbo", "base_model:adapter:stabilityai/sdxl-turbo", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-05-22T09:41:54Z
--- license: creativeml-openrail-m library_name: diffusers tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - diffusers-training - lora base_model: stabilityai/sdxl-turbo inference: true --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA text2image fine-tuning - BogdanTurbal/synt_data_v_3 These are LoRA adaption weights for stabilityai/sdxl-turbo. The weights were fine-tuned on the BogdanTurbal/gen_data_test dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) ![img_4](./image_4.png) ![img_5](./image_5.png) LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
Syed-Hasan-8503/Linear_Tiny_87M
Syed-Hasan-8503
2024-05-22T09:56:32Z
0
0
null
[ "arxiv:2405.06640", "license:apache-2.0", "region:us" ]
null
2024-05-22T05:15:59Z
--- license: apache-2.0 --- # Linear_Tiny_87M ## Introduction Linear transformers have emerged as a subquadratic-time alternative to softmax attention and have garnered significant interest due to their fixed-size recurrent state that lowers inference cost. However, their original formulation suffers from poor scaling and underperforms compute-matched transformers. Recent linear models such as RWKV and Mamba have attempted to address these shortcomings by proposing novel time-mixing and gating architectures, but pre-training large language models requires significant data and compute investments. Thus, the search for subquadratic architectures is limited by the availability of compute and quality pre-training datasets. As a cost-effective alternative to pre-training linear transformers, we propose Scalable UPtraining for Recurrent Attention (SUPRA). For more detail, refer to the [paper](https://arxiv.org/abs/2405.06640) Linear_Tiny_87M is a linear model that has been trained on a subset of redpajama dataset for 1 epoch on **1x A4000**. It took almost 4 hours for training to be completed. ## Usage Just download the checkpoint and afterwards run the following code snippet: ```python cd scripts python generate.py \ --model open_lm_87m \ --checkpoint /path/to/checkpoint.pt \ --positional-embedding-type head_rotary \ --input-text "Machine Learning is a" ```
NakanoMiku0-0/llama3-patent_finetune
NakanoMiku0-0
2024-05-22T09:50:08Z
11
0
transformers
[ "transformers", "gguf", "llama", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-22T08:31:49Z
--- license: apache-2.0 ---
abigail8n21/ppo-LunarLander-v2
abigail8n21
2024-05-22T09:45:48Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-22T09:45:21Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 244.28 +/- 17.63 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
jmainz/wav2vec2-mms-1b-l1107-mus-asr-10m
jmainz
2024-05-22T09:41:43Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/mms-1b-l1107", "base_model:finetune:facebook/mms-1b-l1107", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-20T01:56:50Z
--- license: cc-by-nc-4.0 base_model: facebook/mms-1b-l1107 tags: - generated_from_trainer metrics: - wer model-index: - name: wav2vec2-mms-1b-l1107-mus-asr-10m results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-mms-1b-l1107-mus-asr-10m This model is a fine-tuned version of [facebook/mms-1b-l1107](https://huggingface.co/facebook/mms-1b-l1107) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4432 - Wer: 0.4844 - Cer: 0.1329 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.4 - num_epochs: 12 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | No log | 4.0 | 108 | 3.4128 | 1.0 | 1.0 | | No log | 5.0 | 135 | 3.2507 | 1.0 | 1.0 | | No log | 6.0 | 162 | 2.9310 | 1.0 | 0.3287 | | No log | 7.0 | 189 | 2.3864 | 0.875 | 0.1958 | | 5.6606 | 8.0 | 216 | 2.5689 | 0.7188 | 0.1772 | | 5.6606 | 9.0 | 243 | 2.2867 | 0.625 | 0.1655 | | 5.6606 | 10.0 | 270 | 2.3748 | 0.5 | 0.1352 | | 5.6606 | 11.0 | 297 | 2.4432 | 0.4844 | 0.1329 | | 5.6606 | 12.0 | 324 | 2.5012 | 0.5156 | 0.1352 | ### Framework versions - Transformers 4.41.0 - Pytorch 1.10.1+cu111 - Datasets 2.19.1 - Tokenizers 0.19.1
diozhug/tensorflow-trash-classification
diozhug
2024-05-22T09:41:03Z
18
0
keras
[ "keras", "en", "dataset:garythung/trashnet", "region:us" ]
null
2024-05-21T11:41:42Z
--- datasets: - garythung/trashnet language: - en metrics: - accuracy library_name: keras --- This repository contains a sample work to classify garbage type based on resized images on this [repository](https://huggingface.co/datasets/garythung/trashnet). There are 2 models available: - trash-classification-no-aug.keras - trash-classification-aug.keras The `trash-classification-no-aug.keras` model trained without data augmentation: ```python datagen = ImageDataGenerator( validation_split=0.2 ) ``` While the `trash-classification-aug.keras` model trained with more data augmentation works in the dataset: ```python # With data augmentation datagen = ImageDataGenerator( rescale=1./255, validation_split=0.2, width_shift_range=0.1, height_shift_range=0.1, horizontal_flip=True ) ``` The models trained with Tensorflow Functional API by using this approach: ``` Conv --> BatchNorm --> Conv --> BatchNorm --> MaxPooling (3x) ``` For the detailed description about the training process and models' performace, you can visit this Github [repository](https://github.com/dioz95/trash-classification/tree/main).
Hemg/Multiple-choice-qa
Hemg
2024-05-22T09:38:46Z
1,680
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "multiple-choice", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
2024-05-22T08:57:46Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: Multiple-choice-qa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Multiple-choice-qa This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5306 - Accuracy: 0.7961 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7347 | 1.0 | 2299 | 0.5306 | 0.7961 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
anzeo/fine_tuned_cb_sloberta
anzeo
2024-05-22T09:35:29Z
104
0
transformers
[ "transformers", "tensorboard", "safetensors", "camembert", "text-classification", "generated_from_trainer", "base_model:EMBEDDIA/sloberta", "base_model:finetune:EMBEDDIA/sloberta", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-22T09:33:49Z
--- license: cc-by-sa-4.0 base_model: EMBEDDIA/sloberta tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: fine_tuned_cb_sloberta results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine_tuned_cb_sloberta This model is a fine-tuned version of [EMBEDDIA/sloberta](https://huggingface.co/EMBEDDIA/sloberta) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4656 - Accuracy: 0.6364 - F1: 0.5977 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:| | 0.7547 | 3.5714 | 50 | 1.5113 | 0.3182 | 0.1536 | | 0.5654 | 7.1429 | 100 | 0.9706 | 0.6364 | 0.5977 | | 0.2549 | 10.7143 | 150 | 1.2822 | 0.5909 | 0.5481 | | 0.1062 | 14.2857 | 200 | 1.4263 | 0.5 | 0.4606 | | 0.0407 | 17.8571 | 250 | 1.2224 | 0.6364 | 0.5977 | | 0.0233 | 21.4286 | 300 | 1.4225 | 0.5909 | 0.5545 | | 0.0189 | 25.0 | 350 | 1.4185 | 0.6364 | 0.5977 | | 0.0173 | 28.5714 | 400 | 1.4656 | 0.6364 | 0.5977 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.1.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
blockblockblock/airoboros-l2-13b-gpt4-1.4.1-bpw6-exl2
blockblockblock
2024-05-22T09:33:40Z
7
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:jondurbin/airoboros-gpt4-1.4.1", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "exl2", "region:us" ]
text-generation
2024-05-22T09:30:19Z
--- license: other datasets: - jondurbin/airoboros-gpt4-1.4.1 --- ### Overview Llama 2 13b fine tune using https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1 See the previous llama 65b model card for info: https://hf.co/jondurbin/airoboros-65b-gpt4-1.4 ### Licence and usage restrictions This model was built on llama-2, which has a proprietary/custom Meta license. - See the LICENSE.txt file attached for the original license, along with USE_POLICY.md which was also provided by Meta. The data used to fine-tune the llama-2-13b-hf model was generated by GPT4 via OpenAI API calls.using [airoboros](https://github.com/jondurbin/airoboros) - The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI - what does *compete* actually mean here? - these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place - if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works - the training data used in essentially all large language models includes a significant of copyrighted or otherwise unallowable licensing in the first place - other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2 I am purposingly leaving this license ambiguous (other than the fact you must comply with the Meta original license) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially due to the OpenAI API usage. Either way, by using this model, you agree to completely indemnify me.
damgomz/ft_16_1e6_base
damgomz
2024-05-22T09:31:12Z
105
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "fill-mask", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-21T15:15:51Z
--- language: en tags: - fill-mask kwargs: timestamp: '2024-05-22T11:31:06' project_name: ft_16_1e6_base_emissions_tracker run_id: c987d7ad-c646-4380-8797-f9eaa5b47769 duration: 67712.96358895302 emissions: 0.0409741595063809 emissions_rate: 6.051154363160325e-07 cpu_power: 42.5 gpu_power: 0.0 ram_power: 3.75 cpu_energy: 0.7993875543463544 gpu_energy: 0 ram_energy: 0.0705336682684716 energy_consumed: 0.8699212226148262 country_name: Switzerland country_iso_code: CHE region: .nan cloud_provider: .nan cloud_region: .nan os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34 python_version: 3.10.4 codecarbon_version: 2.3.4 cpu_count: 2 cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz gpu_count: .nan gpu_model: .nan longitude: .nan latitude: .nan ram_total_size: 10 tracking_mode: machine on_cloud: N pue: 1.0 --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 67712.96358895302 | | Emissions (Co2eq in kg) | 0.0409741595063809 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.7993875543463544 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0705336682684716 | | Consumed energy (kWh) | 0.8699212226148262 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.13034745490873456 | | Emissions (Co2eq in kg) | 0.026520910739006597 | ## Note 21 May 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_16_1e6_base | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1e-06 | | batch_size | 16 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 32586 | ## Training and Testing steps Epoch | Train Loss | Test Loss | Accuracy | Recall ---|---|---|---|--- | 0 | 0.585760 | 0.506497 | 0.758684 | 0.863976 | | 1 | 0.455673 | 0.426805 | 0.806273 | 0.827536 | | 2 | 0.385133 | 0.391296 | 0.821448 | 0.816982 | | 3 | 0.340157 | 0.381730 | 0.828961 | 0.861879 | | 4 | 0.298933 | 0.385058 | 0.831023 | 0.834253 | | 5 | 0.258766 | 0.392635 | 0.825573 | 0.849310 |
ayushi12/finetuned-Blastocyst_ICM
ayushi12
2024-05-22T09:29:27Z
197
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-22T06:39:49Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - image-classification - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: finetuned-Blastocyst_ICM results: - task: name: Image Classification type: image-classification dataset: name: Blastocyst_ICM type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.8571428571428571 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-Blastocyst_ICM This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Blastocyst_ICM dataset. It achieves the following results on the evaluation set: - Loss: 0.4171 - Accuracy: 0.8571 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.6635 | 0.7874 | 100 | 0.6705 | 0.7619 | | 0.6585 | 1.5748 | 200 | 0.4977 | 0.8571 | | 0.8002 | 2.3622 | 300 | 0.6194 | 0.8095 | | 0.6907 | 3.1496 | 400 | 0.6097 | 0.7619 | | 0.5806 | 3.9370 | 500 | 0.6073 | 0.7143 | | 0.6059 | 4.7244 | 600 | 0.4171 | 0.8571 | | 0.5804 | 5.5118 | 700 | 0.4862 | 0.8095 | | 0.6223 | 6.2992 | 800 | 0.4292 | 0.8095 | | 0.6696 | 7.0866 | 900 | 0.4574 | 0.8095 | | 0.5269 | 7.8740 | 1000 | 0.4643 | 0.8571 | | 0.5089 | 8.6614 | 1100 | 0.4634 | 0.8095 | | 0.4782 | 9.4488 | 1200 | 0.5434 | 0.8095 | | 0.5426 | 10.2362 | 1300 | 0.6587 | 0.6667 | | 0.5604 | 11.0236 | 1400 | 0.4834 | 0.7143 | | 0.627 | 11.8110 | 1500 | 0.5787 | 0.7619 | | 0.4389 | 12.5984 | 1600 | 0.6285 | 0.7619 | | 0.3936 | 13.3858 | 1700 | 0.7870 | 0.7619 | | 0.4023 | 14.1732 | 1800 | 0.6466 | 0.8095 | | 0.4683 | 14.9606 | 1900 | 0.5086 | 0.7619 | | 0.4502 | 15.7480 | 2000 | 0.4940 | 0.8095 | | 0.47 | 16.5354 | 2100 | 0.6389 | 0.8095 | | 0.4109 | 17.3228 | 2200 | 0.4713 | 0.8571 | | 0.4654 | 18.1102 | 2300 | 0.6457 | 0.7619 | | 0.3359 | 18.8976 | 2400 | 0.4706 | 0.8095 | | 0.3343 | 19.6850 | 2500 | 0.6813 | 0.8095 | | 0.4359 | 20.4724 | 2600 | 0.8620 | 0.7143 | | 0.446 | 21.2598 | 2700 | 0.5914 | 0.9048 | | 0.2901 | 22.0472 | 2800 | 0.8846 | 0.8095 | | 0.3261 | 22.8346 | 2900 | 0.5528 | 0.8571 | | 0.4159 | 23.6220 | 3000 | 0.6383 | 0.8095 | | 0.3056 | 24.4094 | 3100 | 0.8316 | 0.8095 | | 0.27 | 25.1969 | 3200 | 1.0901 | 0.7619 | | 0.299 | 25.9843 | 3300 | 0.9170 | 0.7143 | | 0.2433 | 26.7717 | 3400 | 1.0046 | 0.8095 | | 0.2623 | 27.5591 | 3500 | 0.8359 | 0.7619 | | 0.2526 | 28.3465 | 3600 | 0.5864 | 0.8571 | | 0.3307 | 29.1339 | 3700 | 0.6282 | 0.8571 | | 0.2038 | 29.9213 | 3800 | 1.0462 | 0.7619 | | 0.3419 | 30.7087 | 3900 | 1.2281 | 0.7143 | | 0.2625 | 31.4961 | 4000 | 0.9750 | 0.7619 | | 0.1707 | 32.2835 | 4100 | 1.0191 | 0.8095 | | 0.2046 | 33.0709 | 4200 | 0.9401 | 0.8095 | | 0.2009 | 33.8583 | 4300 | 0.9374 | 0.8571 | | 0.222 | 34.6457 | 4400 | 1.1820 | 0.8095 | | 0.2469 | 35.4331 | 4500 | 0.8827 | 0.8571 | | 0.1348 | 36.2205 | 4600 | 0.8871 | 0.8095 | | 0.2494 | 37.0079 | 4700 | 0.8910 | 0.8095 | | 0.1272 | 37.7953 | 4800 | 0.9666 | 0.8095 | | 0.1682 | 38.5827 | 4900 | 0.8490 | 0.8571 | | 0.1495 | 39.3701 | 5000 | 0.9831 | 0.8571 | | 0.174 | 40.1575 | 5100 | 1.2082 | 0.7619 | | 0.1375 | 40.9449 | 5200 | 1.2987 | 0.7619 | | 0.1043 | 41.7323 | 5300 | 1.1446 | 0.8095 | | 0.2958 | 42.5197 | 5400 | 1.3286 | 0.7143 | | 0.1882 | 43.3071 | 5500 | 1.1686 | 0.8095 | | 0.1322 | 44.0945 | 5600 | 1.3669 | 0.7619 | | 0.1551 | 44.8819 | 5700 | 1.2882 | 0.7619 | | 0.1749 | 45.6693 | 5800 | 1.2481 | 0.8095 | | 0.1064 | 46.4567 | 5900 | 1.3224 | 0.8095 | | 0.1947 | 47.2441 | 6000 | 1.1225 | 0.8095 | | 0.1495 | 48.0315 | 6100 | 1.5170 | 0.7143 | | 0.2192 | 48.8189 | 6200 | 0.9928 | 0.8571 | | 0.1303 | 49.6063 | 6300 | 1.0310 | 0.8095 | | 0.2188 | 50.3937 | 6400 | 1.0219 | 0.8095 | | 0.1485 | 51.1811 | 6500 | 1.0695 | 0.8095 | | 0.1065 | 51.9685 | 6600 | 1.2479 | 0.8095 | | 0.1731 | 52.7559 | 6700 | 1.1878 | 0.8095 | | 0.2507 | 53.5433 | 6800 | 1.1535 | 0.7619 | | 0.1191 | 54.3307 | 6900 | 1.0544 | 0.8571 | | 0.1048 | 55.1181 | 7000 | 1.1502 | 0.8095 | | 0.1715 | 55.9055 | 7100 | 1.2187 | 0.8095 | | 0.125 | 56.6929 | 7200 | 1.2635 | 0.8095 | | 0.1022 | 57.4803 | 7300 | 1.2571 | 0.8095 | | 0.0987 | 58.2677 | 7400 | 1.2676 | 0.8095 | | 0.0761 | 59.0551 | 7500 | 1.2635 | 0.8095 | | 0.1184 | 59.8425 | 7600 | 1.2629 | 0.8095 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
sihyeok000/DETRtest2
sihyeok000
2024-05-22T09:27:50Z
190
0
transformers
[ "transformers", "safetensors", "detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
object-detection
2024-05-22T09:27:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
qwp4w3hyb/Phi-3-medium-128k-instruct-iMat-GGUF
qwp4w3hyb
2024-05-22T09:27:39Z
126
4
null
[ "gguf", "nlp", "code", "microsoft", "phi", "instruct", "finetune", "imatrix", "importance matrix", "text-generation", "multilingual", "base_model:microsoft/Phi-3-medium-128k-instruct", "base_model:quantized:microsoft/Phi-3-medium-128k-instruct", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-21T18:05:39Z
--- license: mit license_link: >- https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation base_model: microsoft/Phi-3-medium-128k-instruct tags: - nlp - code - microsoft - phi - instruct - finetune - gguf - imatrix - importance matrix --- # Quant Infos - Requires latest llama.cpp master; - quants done with an importance matrix for improved quantization loss - gguf & imatrix generated from bf16 for "optimal" accuracy loss (some say this is snake oil, but it can't hurt) - Wide coverage of different gguf quant types from Q\_8\_0 down to IQ1\_S - Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [201cc11afa0a1950e1f632390b2ac6c937a0d8f0](https://github.com/ggerganov/llama.cpp/commit/201cc11afa0a1950e1f632390b2ac6c937a0d8f0) - Imatrix generated with [this](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) multi-purpose dataset. ``` ./imatrix -c 512 -m $model_name-bf16.gguf -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-bf16-gmerged.dat ``` # Original Model Card: ## Model Summary The Phi-3-Medium-128K-Instruct is a 14B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties. The model belongs to the Phi-3 family with the Medium version in two variants [4k](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) which is the context length (in tokens) that it can support. The model has underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Medium-128K-Instruct showcased a robust and state-of-the-art performance among models of the same-size and next-size-up. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) | | Short Context | Long Context | | ------- | ------------- | ------------ | | Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)| | Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)| | Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)| | Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct)| ## Intended Uses **Primary use cases** The model is intended for broad commercial and research use in English. The model provides uses for general purpose AI systems and applications which require : 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. ## How to Use Phi-3-Medium-128k-Instruct has been integrated in the development version (4.40.2) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following: * When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function. * Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source. The current `transformers` version can be verified with: `pip list | grep transformers`. Phi-3-Medium-128k-Instruct is also available in [Azure AI Studio](https://aka.ms/phi3-azure-ai). ### Tokenizer Phi-3-Medium-128k-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Chat Format Given the nature of the training data, the Phi-3-Medium-128k-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follow: ```markdown <|user|>\nQuestion <|end|>\n<|assistant|> ``` For example: ```markdown <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after `<|assistant|>` . In case of few-shots prompt, the prompt can be formatted as the following: ```markdown <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Sample inference code This code snippets show how to get quickly started with running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model_id = "microsoft/Phi-3-medium-128k-instruct" model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_id) messages = [ {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` *Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.* ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3-Medium-128k-Instruct has 14B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines. * Inputs: Text. It is best suited for prompts using chat format. * Context length: 128k tokens * GPUs: 512 H100-80G * Training time: 42 days * Training data: 4.8T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models. * Release dates: The model weight is released on May 21, 2024. ### Datasets Our training data includes a wide variety of sources, totaling 4.8 trillion tokens (including 10% multilingual), and is a combination of 1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. We are focusing on the quality of data that could potentially improve the reasoning ability for the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in premier league in a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for the small size models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report). ## Benchmarks We report the results for Phi-3-Medium-128k-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mixtral-8x22b, Gemini-Pro, Command R+ 104B, Llama-3-70B-Instruct, GPT-3.5-Turbo-1106, and GPT-4-Turbo-1106(Chat). All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation. As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k–shot examples is listed per-benchmark. |Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |---------|-----------------------|--------|-------------|-------------------|-------------------|----------|------------------------| |AGI Eval<br>5-shot|49.7|50.1|54.0|56.9|48.4|49.0|59.6| |MMLU<br>5-shot|76.6|73.8|76.2|80.2|71.4|66.7|84.0| |BigBench Hard<br>3-shot|77.9|74.1|81.8|80.4|68.3|75.6|87.7| |ANLI<br>7-shot|57.3|63.4|65.2|68.3|58.1|64.2|71.7| |HellaSwag<br>5-shot|81.6|78.0|79.0|82.6|78.8|76.2|88.3| |ARC Challenge<br>10-shot|91.0|86.9|91.3|93.0|87.4|88.3|95.6| |ARC Easy<br>10-shot|97.6|95.7|96.9|98.2|96.3|96.1|98.8| |BoolQ<br>2-shot|86.5|86.1|82.7|89.1|79.1|86.4|91.3| |CommonsenseQA<br>10-shot|82.2|82.0|82.0|84.4|79.6|81.8|86.7| |MedQA<br>2-shot|67.6|59.2|67.9|78.5|63.4|58.2|83.7| |OpenBookQA<br>10-shot|87.2|86.8|88.6|91.8|86.0|86.4|93.4| |PIQA<br>5-shot|87.8|86.4|85.0|85.3|86.6|86.2|90.1| |Social IQA<br>5-shot|79.0|75.3|78.2|81.1|68.3|75.4|81.7| |TruthfulQA (MC2)<br>10-shot|74.3|57.8|67.4|81.9|67.7|72.6|85.2| |WinoGrande<br>5-shot|78.9|77.0|75.3|83.3|68.8|72.2|86.7| |TriviaQA<br>5-shot|73.9|82.8|84.5|78.5|85.8|80.2|73.3| |GSM8K Chain of Thought<br>8-shot|87.5|78.3|83.8|93.5|78.1|80.4|94.2| |HumanEval<br>0-shot|58.5|61.6|39.6|78.7|62.2|64.4|79.9| |MBPP<br>3-shot|73.8|68.9|70.7|81.3|77.8|73.2|86.7| |Average|77.3|75.0|76.3|82.5|74.3|75.4|85.2| We take a closer look at different categories across 80 public benchmark datasets at the table below: |Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |--------|------------------------|--------|-------------|-------------------|-------------------|----------|------------------------| | Popular aggregated benchmark | 72.3 | 69.9 | 73.4 | 76.3 | 67.0 | 67.5 | 80.5 | | Reasoning | 83.2 | 79.3 | 81.5 | 86.7 | 78.3 | 80.4 | 89.3 | | Language understanding | 75.3 | 75.7 | 78.7 | 77.9 | 70.4 | 75.3 | 81.6 | | Code generation | 64.2 | 68.6 | 60.0 | 69.3 | 70.4 | 66.7 | 76.1 | | Math | 52.9 | 45.3 | 52.5 | 59.7 | 52.8 | 50.9 | 67.1 | | Factual knowledge | 47.5 | 60.3 | 60.6 | 52.4 | 63.4 | 54.6 | 45.9 | | Multilingual | 62.2 | 67.8 | 69.8 | 62.0 | 67.0 | 73.4 | 78.2 | | Robustness | 70.2 | 57.9 | 65.5 | 78.7 | 69.3 | 69.7 | 84.6 | ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3-Medium model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: + Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128k](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda) ## Cross Platform Support ONNX runtime ecosystem now supports Phi3 Medium models across platforms and hardware. Optimized phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktops GPUs (AMD, Intel, and NVIDIA). Along with DML, ONNX Runtime provides cross platform support for Phi3 Medium across a range of devices CPU, GPU, and mobile. Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. ONNX model for int4 CUDA: Quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN ## License The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-medium-128k/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
linzw/PASTED-Classification
linzw
2024-05-22T09:21:19Z
91
0
transformers
[ "transformers", "safetensors", "longformer", "token-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-22T08:49:44Z
--- license: apache-2.0 ---
theglassofwater/finetuning_8.0epochs
theglassofwater
2024-05-22T09:18:09Z
196
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T09:17:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hamzabennz/whisperDAR
hamzabennz
2024-05-22T09:13:29Z
101
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "ar", "dataset:team4/8dretna_daridja", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-16T12:18:53Z
--- language: - ar license: apache-2.0 base_model: openai/whisper-small tags: - hf-asr-leaderboard - generated_from_trainer datasets: - team4/8dretna_daridja metrics: - wer model-index: - name: whisperDAR results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: 8dretna_daridja type: team4/8dretna_daridja args: 'split: test' metrics: - name: Wer type: wer value: 94.27182780767228 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisperDAR This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the 8dretna_daridja dataset. It achieves the following results on the evaluation set: - Loss: 4.6653 - Wer: 94.2718 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 10 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - training_steps: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 4.3481 | 0.0107 | 5 | 4.6653 | 94.2718 | ### Framework versions - Transformers 4.42.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
IR-Cocktail/roberta-base-mean-v3-msmarco
IR-Cocktail
2024-05-22T09:08:06Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-22T07:55:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 6653 with parameters: ``` {'batch_size': 75, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 10000, "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "correct_bias": false, "eps": 1e-06, "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
hgnoi/9tr6jJz0ckB9Wc71
hgnoi
2024-05-22T09:06:26Z
131
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T09:04:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AI4MIRO/Dose_prediction_han_PBS
AI4MIRO
2024-05-22T09:06:15Z
0
0
null
[ "region:us" ]
null
2024-01-03T16:00:25Z
<h1 style="border-bottom: 2px solid black; font-size: 100px;" align="center"> HDUNet PBS </h1> _Trained by Margerie Huet Dastarac ._ <br> _Training date: 06/06/2023 ._ ## 1. Task Description Prediction of the dose distribution with proton therapy beam scanning treatment. ## 2. Model ### 2.1. Architecture <img width="100%" src="https://cdn-uploads.huggingface.co/production/uploads/65c9dbefd6cbf9dfed67367e/xGGUE5spRY6tar5R4VeKX.png" alt="error while loading image"> _Figure 1: HDUNet PBS architecture_ ### 2.2. Input <ul> <li> CT</li> <li> Target volumes contours and prescription</li> <li> Organs at risks contours</li> </ul> ### 2.3. Output <ul> <li> DOSE</li> </ul> ### 2.4 Training details <ul> <li> Number of epoch: 400 </li> <li> Loss function: MSE loss </li> <li> Optimizer: AdamW </li> <li> Learning Rate: 0.0001 </li> <li> Dropout: No </li> <li> Patch size in voxels: (128,128,128) </li> <li> Data augmentation used: <ul> <li> RandCrop</li> </ul> </li> </ul> ## 3. Dataset <ul> <li> Location: Oropharynx </li> <li> Training set size: 57 </li> <li> Resolution in mm: 3x3x3 </li> </ul> ## Performance + TBD
hgnoi/dqT8Dp6MP0odz9lf
hgnoi
2024-05-22T09:06:05Z
126
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T09:04:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hgnoi/CJZMW0R7G6qmQF1N
hgnoi
2024-05-22T09:05:46Z
130
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T09:03:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
linzw/PASTED-aggregate
linzw
2024-05-22T08:59:58Z
91
0
transformers
[ "transformers", "safetensors", "longformer", "token-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-22T08:50:54Z
--- license: apache-2.0 ---
LeoZZzzZZ/bert-base-uncased-finetuned-fact
LeoZZzzZZ
2024-05-22T08:57:44Z
69
0
transformers
[ "transformers", "tf", "tensorboard", "bert", "text-classification", "generated_from_keras_callback", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-19T04:43:48Z
--- license: apache-2.0 base_model: google-bert/bert-base-uncased tags: - generated_from_keras_callback model-index: - name: LeoZZzzZZ/bert-base-uncased-finetuned-fact results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # LeoZZzzZZ/bert-base-uncased-finetuned-fact This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3218 - Validation Loss: 0.6480 - Train Accuracy: 0.7740 - Train Precision: 0.7726 - Train Recall: 0.7740 - Train F1: 0.7725 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 90930, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Train Precision | Train Recall | Train F1 | Epoch | |:----------:|:---------------:|:--------------:|:---------------:|:------------:|:--------:|:-----:| | 0.4913 | 0.5867 | 0.7729 | 0.7707 | 0.7729 | 0.7706 | 0 | | 0.3218 | 0.6480 | 0.7740 | 0.7726 | 0.7740 | 0.7725 | 1 | ### Framework versions - Transformers 4.41.0 - TensorFlow 2.15.0 - Datasets 2.19.1 - Tokenizers 0.19.1
linzw/PASTED-grammatical
linzw
2024-05-22T08:55:25Z
93
0
transformers
[ "transformers", "safetensors", "longformer", "token-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-22T08:50:01Z
--- license: apache-2.0 ---
Jubliano/wav2vec2-large-xls-r-300m-ipa-INTERNATIONAL1.6WithoutSpaces
Jubliano
2024-05-22T08:52:05Z
5
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-21T01:19:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
matthieuzone/BUCHETTE_DE_CHEVREter
matthieuzone
2024-05-22T08:51:57Z
4
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-21T16:28:34Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks cheese widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - matthieuzone/BUCHETTE_DE_CHEVREter <Gallery /> ## Model description These are matthieuzone/BUCHETTE_DE_CHEVREter LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of sks cheese to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](matthieuzone/BUCHETTE_DE_CHEVREter/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
sihyeok000/DETR_test
sihyeok000
2024-05-22T08:48:28Z
190
0
transformers
[ "transformers", "safetensors", "detr", "image-feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-feature-extraction
2024-05-22T08:48:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
werty1248/Llama-3-Ko-8B-Instruct-AOG
werty1248
2024-05-22T08:45:38Z
2,226
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "dataset:beomi/KoAlpaca-v1.1a", "dataset:kyujinpy/OpenOrca-KO", "dataset:nlpai-lab/openassistant-guanaco-ko", "base_model:beomi/Llama-3-Open-Ko-8B", "base_model:finetune:beomi/Llama-3-Open-Ko-8B", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-21T14:24:34Z
--- library_name: transformers license: llama3 base_model: beomi/Llama-3-Open-Ko-8B datasets: - beomi/KoAlpaca-v1.1a - kyujinpy/OpenOrca-KO - nlpai-lab/openassistant-guanaco-ko language: - ko pipeline_tag: text-generation --- # Model Card <!-- Provide a quick summary of what the model is/does. --> ~아오지~ ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) (최신 버전)의 Instruction tuning 버전 모델입니다. - Dataset: - [beomi/Ko**A**lpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a) - [kyujinpy/Open**O**rca-KO](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO) - [nlpai-lab/openassistant-**g**uanaco-ko](https://huggingface.co/datasets/nlpai-lab/openassistant-guanaco-ko) - Instruction format: - alpaca ### Dataset - 상업적으로 이용 가능한 데이터 셋을 사용하였습니다. - 향후 비교를 위하여, 수학/코딩 관련 질문이 많은 데이터 셋(ex: kyujinpy/KOpen-platypus)은 제외하였습니다. - 멀티 턴 대화 데이터(nlpai-lab/openassistant-guanaco-ko)를 추가해보았습니다. - beomi/KoAlpaca-v1.1a - 80% 확률로 instruction 추가("당신은 인공지능 비서입니다. 사용자가 답을 이해하기 위해 외부에서 검색할 필요가 없도록 상세한 답변을 제공하세요.") - kyujinpy/OpenOrca-KO - 변경사항 없음 - nlpai-lab/openassistant-guanaco-ko - 멀티 턴 대화의 경우, 마지막 Assistant의 답변을 목표로 함 - Alpaca format에 맞춰 이전 대화 내용에 대한 전처리 수행 - 80% 확률로 instruction 추가 ("당신은 인공지능 비서입니다. 사용자가 답을 이해하기 위해 외부에서 검색할 필요가 없도록 상세한 답변을 제공하세요.") ### Training details Training: Axolotl을 이용해 LoRA로 3epoch 학습 시켰습니다. - lora_r: 32 - lora_alpha: 32 - lora_dropout: 0.05 - gradient_accumulation_steps: 8 - micro_batch_size: 4 - num_epochs: 3 - learning_rate: 0.0002 (2e-4) - lr_scheduler: cosine - warmup_steps: 50 - sequence_len: 4096 - bf16 학습 시간: 1xA100, 약 8시간 ### Evaluation - boolq를 제외하면 큰 정확도 향상은 없었습니다. - 5shot kobest (Accuracy) | Tasks |[werty1248/Llama-3-Ko-8B-OpenOrca](https://huggingface.co/werty1248/Llama-3-Ko-8B-OpenOrca)|[beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B)|[werty1248/Llama-3-Ko-8B-Instruct-AOG](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-AOG)| |----------------|------:|------:|------:| |kobest_boolq |0.7158±0.0120|0.7963±0.0108|0.8312±0.0100| |kobest_copa |0.7620±0.0135|0.8110±0.0124|0.8120±0.0124| |kobest_hellaswag|0.4740±0.0224|0.4780±0.0224|0.4700±0.0223| |kobest_sentineg |0.9471±0.0112|0.9622±0.0096|0.9647±0.0093| |kobest_wic |0.6079±0.0138|0.5778±0.0139|0.5937±0.0138| ### Format & Examples - Instruction - Input - Response 형태가 권장되지만, 경험적으로 Question - Instruction - Response로 사용하는 것이 더 나은 것 같습니다. - 잘 된 답변 예시 ``` input = """### Question: 피보나치 수열이 뭐야? 그리고 피보나치 수열에 대해 파이썬 코드를 짜줘볼래? ### Instruction: 친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. ### Response: """ 피보나치 수열은 1차와 2차 수를 사용해 다음 수를 구하는 방식으로 만들어지는 순열입니다. 예를들면 피보나치 수열의 처음 7개 요소는 1, 1, 2, 3, 5, 8, 13입니다. 피보나치 수열의 각 위치에서 2개 이전 위치의 수를 더하여 새로운 요소가 계산됩니다. Python 코드는 다음과 같습니다. def fibo(n): if n<2: return n else: return fibo(n-1)+fibo(n-2) 이렇게 작성함으로써 피보나치 수열의 위치 0부터 n까지 각 요소가 계산됩니다. 6이라는 위치로 계산하는 경우 다음과 같은 출력이 나타납니다: print(fibo(6)) fibonacci(6)==> 8<|end_of_text|> ``` - 잘못된 답변(할루시네이션) 예시 ``` input = """### Question: 세종대왕의 집현전 맥북 던짐 사건에 대해서 알고 있어? ### Instruction: 친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. ### Response: """ 네, 2007년에 세종대왕의 집현전 맥북 던진 사건이 일어났습니다. 동화 작가인 이주영씨가 김소희 작가의 책을 집필하면서, 책 내용상 원래 있었던 세종대왕이 맥북을 던졌다는 내용을 제재로 해서 출판하였고, 이에 김소희 작가는 집현전이란 이름으로 이주영 작가에게 법적 소송을 걸게 되었습니다. 그러나 결론적으로, 김소희 작가의 집현전이란 단어는 특정 단체나 장소만을 명백하게 사용하는 '호칭'의 성격과 함께 다수의 집현전 사람들이 공동 사용하는 '표준'의 성격을 가졌으므로, 이주영 작가의 출판 행위가 명예훼손에 해당되는지 여부를 판단할 때에는 출판 당시 집현전이 일반적으로 어떤 의미를 가지고 있었는지가 중요한 요소로 고려되었습니다. 따라서 단순히 집현전이 가계상 특정 단체 및 장소와 관련된 호칭으로만 사용되어왔다면, 출판 행위로 인해 명예훼손을 받았다는 김소희 작가의 주장은 받아들여지지 않았을 수도 있습니다.<|end_of_text|> ``` - 멀티턴 대화 예시 (신뢰도 낮음) ``` """### Question: 피보나치 수열이 뭐야? 그리고 피보나치 수열에 대해 파이썬 코드를 짜줘볼래? ### Response: 피보나치 수열은 1차와 2차 수를 사용해 다음 수를 구하는 방식으로 만들어지는 순열입니다. 예를들면 피보나치 수열의 처음 7개 요소는 1, 1, 2, 3, 5, 8, 13입니다. 피보나치 수열의 각 위치에서 2개 이전 위치의 수를 더하여 새로운 요소가 계산됩니다. Python 코드는 다음과 같습니다. def fibo(n): if n<2: return n else: return fibo(n-1)+fibo(n-2) 이렇게 작성함으로써 피보나치 수열의 위치 0부터 n까지 각 요소가 계산됩니다. 6이라는 위치로 계산하는 경우 다음과 같은 출력이 나타납니다: print(fibo(6)) fibonacci(6)==> 8 ### Question: 더 빠른 알고리즘은 없을까? ### Instruction: 친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. ### Response: """ 다른 Python 코드는 다음과 같습니다: def fibo(n): a, b = 0, 1 for _ in range(n): a, b = b, a+b return a 이 코드는 두 개의 로컬 변수 a 및 b를 사용하는 이터레이터를 사용하여 계산을 수행합니다. 초기화 시 a는 피보나치 수열의 위치 0의 값으로 설정되고, b는 이전 계산에서 피봇 계열의 위치 0의 값(위치 0을 포함하면 1단계에서 1이며, 위치 1을 포함하면 2단계에서 1)으로 설정됩니다. 다음 각 반복에서 a는 이전 로컬 변수 b의 값과 새로운 값으로 값이 바뀝니다. 두 번째 파라미터 __는 각 반복에서 값이 출력되지 않는다는(그러나 여전히 사용됩니다) 것을 반영하기 위해 선택되었습니다.<|end_of_text|> ``` ### License: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
Trelis/Meta-Llama-3-8B-Instruct-Gaeilge
Trelis
2024-05-22T08:45:30Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "conversational", "en", "dataset:Trelis/gawiki", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-21T16:47:51Z
--- language: - en pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: llama3 widget: - example_title: Hello messages: - role: user content: Hey my name is Julien! How are you? - example_title: Winter holidays messages: - role: system content: >- You are a helpful and honest assistant. Please, respond concisely and truthfully. - role: user content: Can you recommend a good destination for Winter holidays? - example_title: Programming assistant messages: - role: system content: >- You are a helpful and honest code and programming assistant. Please, respond concisely and truthfully. - role: user content: Write a function that computes the nth fibonacci number. inference: parameters: max_new_tokens: 300 stop: - <|end_of_text|> - <|eot_id|> datasets: - Trelis/gawiki --- ## Llama 3 - fine-tuned for Irish (Gaeilge) A Llama 3 fine-tune on the Trelis/gawiki dataset. Follows the same prompt format as the base model. ~ ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase. ### Use with transformers You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both. #### Transformers pipeline ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` #### Transformers AutoModelForCausalLM ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ### Use with `llama3` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint Pretraining utilized a cumulative** 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted(tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a two fold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### <span style="text-decoration:underline;">Cyber Security </span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
hbin0701/Mistral_VStar_iter1
hbin0701
2024-05-22T08:42:18Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:mistralai/Mistral-7B-v0.1", "base_model:finetune:mistralai/Mistral-7B-v0.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T07:55:39Z
--- base_model: mistralai/Mistral-7B-v0.1 tags: - alignment-handbook - generated_from_trainer datasets: - HuggingFaceH4/ultrafeedback_binarized model-index: - name: mistral_7b_gsm8k_ep2_1e-5_dpo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/hbin0701/DPO/runs/8aboqbe6) # mistral_7b_gsm8k_ep2_1e-5_dpo This model is a fine-tuned version of [/home/hyeonbin/self_train/Verifiers/models/mistral_7b_gsm8k_ep2_1e-5_rft_round1](https://huggingface.co//home/hyeonbin/self_train/Verifiers/models/mistral_7b_gsm8k_ep2_1e-5_rft_round1) on the GSM8K Train Set. It achieves the following results on the evaluation set (=GSM8K Train subset): - Loss: 0.0005 - Rewards/chosen: -1.7120 - Rewards/rejected: -14.3548 - Rewards/accuracies: 1.0 - Rewards/margins: 12.6428 - Logps/rejected: -1466.6733 - Logps/chosen: -208.2280 - Logits/rejected: -3.2168 - Logits/chosen: -2.3996 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 32 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.014 | 1.0 | 7066 | 0.0005 | -1.7120 | -14.3548 | 1.0 | 12.6428 | -1466.6733 | -208.2280 | -3.2168 | -2.3996 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
LA1512/Led-pubmed-20K-4096-v2
LA1512
2024-05-22T08:40:35Z
90
0
transformers
[ "transformers", "safetensors", "led", "text2text-generation", "generated_from_trainer", "base_model:pszemraj/led-base-book-summary", "base_model:finetune:pszemraj/led-base-book-summary", "license:bsd-3-clause", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-22T08:40:15Z
--- license: bsd-3-clause base_model: pszemraj/led-base-book-summary tags: - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [pszemraj/led-base-book-summary](https://huggingface.co/pszemraj/led-base-book-summary) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.4699 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 1 - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.8607 | 0.16 | 200 | 3.7726 | | 3.6707 | 0.32 | 400 | 3.6384 | | 3.5236 | 0.48 | 600 | 3.5626 | | 3.5792 | 0.64 | 800 | 3.5128 | | 3.4793 | 0.8 | 1000 | 3.4829 | | 3.5169 | 0.96 | 1200 | 3.4699 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
ybelkada/tiny-random-llama-Q4_K_M-GGUF
ybelkada
2024-05-22T08:38:41Z
3
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "endpoints_compatible", "region:us" ]
null
2024-05-22T08:38:40Z
--- library_name: transformers tags: - llama-cpp - gguf-my-repo --- # ybelkada/tiny-random-llama-Q4_K_M-GGUF This model was converted to GGUF format from [`ybelkada/tiny-random-llama`](https://huggingface.co/ybelkada/tiny-random-llama) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ybelkada/tiny-random-llama) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo ybelkada/tiny-random-llama-Q4_K_M-GGUF --model tiny-random-llama.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo ybelkada/tiny-random-llama-Q4_K_M-GGUF --model tiny-random-llama.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tiny-random-llama.Q4_K_M.gguf -n 128 ```
atepeq/Meta-Llama-3-8B-Instruct_musk
atepeq
2024-05-22T08:34:42Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-09T14:51:39Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bespin-global/bge-base-en-v1.5-fine-tuning
bespin-global
2024-05-22T08:34:17Z
6
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-22T07:52:56Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
blockblockblock/airoboros-l2-13b-gpt4-1.4.1-bpw4.8-exl2
blockblockblock
2024-05-22T08:33:04Z
5
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:jondurbin/airoboros-gpt4-1.4.1", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-22T08:29:36Z
--- license: other datasets: - jondurbin/airoboros-gpt4-1.4.1 --- ### Overview Llama 2 13b fine tune using https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1 See the previous llama 65b model card for info: https://hf.co/jondurbin/airoboros-65b-gpt4-1.4 ### Licence and usage restrictions This model was built on llama-2, which has a proprietary/custom Meta license. - See the LICENSE.txt file attached for the original license, along with USE_POLICY.md which was also provided by Meta. The data used to fine-tune the llama-2-13b-hf model was generated by GPT4 via OpenAI API calls.using [airoboros](https://github.com/jondurbin/airoboros) - The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI - what does *compete* actually mean here? - these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place - if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works - the training data used in essentially all large language models includes a significant of copyrighted or otherwise unallowable licensing in the first place - other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2 I am purposingly leaving this license ambiguous (other than the fact you must comply with the Meta original license) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially due to the OpenAI API usage. Either way, by using this model, you agree to completely indemnify me.
dmis-lab/llama2-7b-olaph
dmis-lab
2024-05-22T08:29:19Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "generated_from_trainer", "base_model:Minbyul/llama2-7b-wo-kqa_golden-iter-sft-step1", "base_model:finetune:Minbyul/llama2-7b-wo-kqa_golden-iter-sft-step1", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T06:34:17Z
--- license: llama2 base_model: Minbyul/llama2-7b-wo-kqa_golden-iter-sft-step1 tags: - trl - dpo - generated_from_trainer model-index: - name: llama2-7b-wo-kqa_golden-iter-dpo-step2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama2-7b-wo-kqa_golden-iter-dpo-step2 This model is a fine-tuned version of [Minbyul/llama2-7b-wo-kqa_golden-iter-sft-step1](https://huggingface.co/Minbyul/llama2-7b-wo-kqa_golden-iter-sft-step1) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.2
giantdev/dippy-z035H-sn11m9
giantdev
2024-05-22T08:28:47Z
124
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T08:26:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
giantdev/dippy-oIJbw-sn11m1
giantdev
2024-05-22T08:25:59Z
124
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T08:23:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jaeikson/Llama-Didim
jaeikson
2024-05-22T08:23:06Z
0
0
null
[ "safetensors", "license:apache-2.0", "region:us" ]
null
2024-05-22T08:07:34Z
--- license: apache-2.0 ---
Pasta009/IF-CLP-13B
Pasta009
2024-05-22T08:20:51Z
2
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "en", "dataset:theblackcat102/evol-codealpaca-v1", "dataset:Pasta009/Instruction-Fusion-Code-v1", "arxiv:2312.15692", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T05:28:54Z
--- license: llama2 datasets: - theblackcat102/evol-codealpaca-v1 - Pasta009/Instruction-Fusion-Code-v1 language: - en metrics: - accuracy pipeline_tag: text-generation tags: - code --- Trained with data generated by the Instruction Fusion method. - **Base Model:** CodeLlama-Python-13b - **Seed Instructions:** theblackcat102/evol-codealpaca-v1 - **Repository:** https://github.com/XpastaX/Instruction-Fusion - **Paper:** https://arxiv.org/abs/2312.15692 **Dataset:** - theblackcat102/evol-codealpaca-v1: Python instructions Only - Pasta009/Instruction-Fusion-Code-v1: Python instructions Only (first 50k) **BibTeX:** ``` @misc{guo2024instruction, title={Instruction Fusion: Advancing Prompt Evolution through Hybridization}, author={Weidong Guo and Jiuding Yang and Kaitong Yang and Xiangyang Li and Zhuwei Rao and Yu Xu and Di Niu}, year={2024}, eprint={2312.15692}, archivePrefix={arXiv}, primaryClass={cs.AI} } ```
Chayaaaaa/pythia-6.9b_oasst_dolly_v2_SLERP
Chayaaaaa
2024-05-22T08:19:06Z
0
0
null
[ "merge", "mergekit", "lazymergekit", "databricks/dolly-v2-7b", "pszemraj/pythia-6.9b-HC3", "license:apache-2.0", "region:us" ]
null
2024-05-22T08:19:06Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - databricks/dolly-v2-7b - pszemraj/pythia-6.9b-HC3 --- # pythia-6.9b_oasst_dolly_v2_SLERP pythia-6.9b_oasst_dolly_v2_SLERP is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [databricks/dolly-v2-7b](https://huggingface.co/databricks/dolly-v2-7b) * [pszemraj/pythia-6.9b-HC3](https://huggingface.co/pszemraj/pythia-6.9b-HC3) ## 🧩 Configuration ```yaml slices: - sources: - model: databricks/dolly-v2-7b layer_range: [0, 32] - model: pszemraj/pythia-6.9b-HC3 layer_range: [0, 32] merge_method: slerp base_model: databricks/dolly-v2-7b parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 embed_slerp: true dtype: bfloat16 ```
q1e123/peft-starcoder-lora-a100
q1e123
2024-05-22T08:18:55Z
61
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:distilbert/distilgpt2", "base_model:adapter:distilbert/distilgpt2", "license:apache-2.0", "region:us" ]
null
2024-05-20T06:28:47Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: distilbert/distilgpt2 model-index: - name: peft-starcoder-lora-a100 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # peft-starcoder-lora-a100 This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 50 ### Training results ### Framework versions - PEFT 0.11.1 - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
jeffyang56/codeparrot-ds
jeffyang56
2024-05-22T08:18:19Z
140
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T03:43:47Z
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: codeparrot-ds results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codeparrot-ds This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5894 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.4984 | 0.9222 | 5000 | 1.5894 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
giantdev/dippy-Jf3Wp-sn11m4
giantdev
2024-05-22T08:18:02Z
124
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T08:16:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Pasta009/IF-CL-13B
Pasta009
2024-05-22T08:17:32Z
4
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "en", "dataset:theblackcat102/evol-codealpaca-v1", "dataset:Pasta009/Instruction-Fusion-Code-v1", "arxiv:2312.15692", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T05:21:45Z
--- license: llama2 datasets: - theblackcat102/evol-codealpaca-v1 - Pasta009/Instruction-Fusion-Code-v1 language: - en metrics: - accuracy pipeline_tag: text-generation tags: - code --- Trained with data generated by the Instruction Fusion method. - **Base Model:** CodeLlama-13b - **Seed Instructions:** theblackcat102/evol-codealpaca-v1 - **Repository:** https://github.com/XpastaX/Instruction-Fusion - **Paper:** https://arxiv.org/abs/2312.15692 **BibTeX:** ``` @misc{guo2024instruction, title={Instruction Fusion: Advancing Prompt Evolution through Hybridization}, author={Weidong Guo and Jiuding Yang and Kaitong Yang and Xiangyang Li and Zhuwei Rao and Yu Xu and Di Niu}, year={2024}, eprint={2312.15692}, archivePrefix={arXiv}, primaryClass={cs.AI} } ```
giantdev/dippy-tnArr-sn11m2
giantdev
2024-05-22T08:15:19Z
123
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T08:13:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Futurice/gpt3-finnish-3B-instruct
Futurice
2024-05-22T08:14:01Z
14
1
transformers
[ "transformers", "safetensors", "bloom", "text-generation", "conversational", "fi", "dataset:TurkuNLP/squad_v2_fi", "arxiv:1910.09700", "base_model:TurkuNLP/gpt3-finnish-3B", "base_model:finetune:TurkuNLP/gpt3-finnish-3B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-06T17:42:35Z
--- base_model: TurkuNLP/gpt3-finnish-3B license: apache-2.0 datasets: - TurkuNLP/squad_v2_fi language: - fi pipeline_tag: text-generation --- # Model Card for Model Futurice/gpt3-finnish-3B-instruct The model gpt3-finnish-3B-instruct is an instruction fine-tuned model intended for RAG type Q&A in Finnish. ## Model Details ### Model Description The gpt3-finnish-3B-instruct model is based on TurkuNLP Finnish GPT-3-models. They are a model family of pretrained monolingual GPT-style language models, based on BLOOM-architecture. The model was fine-tuned using a sample of dataset TurkuNLP/squad_v2_fi, that was DeepL translated from SQuAD2.0. - **Developed by:** Martti Sutinen - **Model type:** Bloom - **Language(s) (NLP):** Finnish - **License:** Apache-2.0 - **Finetuned from model:** TurkuNLP/gpt3-finnish-large ## Uses Intended for RAG type Q&A in Finnish. ### Direct Use Intended for text generation and RAG type Q&A in Finnish. Supply a context and ask a question about it. ### Out-of-Scope Use Please do not misuse the model. Not recommended for other use cases. ## Bias, Risks, and Limitations A key limitation is simple and limited selection of fine-tuning data. Please do not expect high quality answers. ### Recommendations Recommeded to continue fine-tuning with more data or newer architecture. ## How to Get Started with the Model - Recommended system message: "Olet avustaja. Seuraavaksi saat kysymyksen tai tehtävän. Kirjoita vastaus parhaasi mukaan siten että se täyttää kysymyksen tai tehtävän vaatimukset." - Recommended format for question about context: Tausta: "{context} \n\nKäytä vain taustaa ja vastaa kysymykseen tai tehtävään: {question}" - Prompt format: tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) Where messages with typical format: messages = [ {"role": "system", "content": system_message}, {"role": "user", "content": prompt_with_context} ]. Here is what the input could look like: \<s><|im_start|>system Olet avustaja. Seuraavaksi saat kysymyksen tai tehtävän. Kirjoita vastaus parhaasi mukaan siten että se täyttää kysymyksen tai tehtävän vaatimukset.<|im_end|> <|im_start|>user Tausta: Dokumentti luotiin tammikuussa. Sen kirjoittajaa ei tunneta. Käytä vain taustaa ja vastaa kysymykseen tai tehtävään: Milloin dokumentti kirjoitettiin?<|im_end|> <|im_start|>assistant Use pipeline with task text-generation and the recommended format. ## Training Details ### Training Data Trained with 20000 random samples from test data in: [TurkuNLP/squad_v2_fi](https://huggingface.co/datasets/TurkuNLP/squad_v2_fi). ### Training Procedure Training was done for 4-bit base model with supervised fine-tuning and Lora. #### Training Hyperparameters - **Training regime:** 4-bit, batch size 2, max steps 20000, data collator for completion only ## Evaluation Evaluation has not been done properly yet. ### Testing Data, Factors & Metrics #### Testing Data Evaluated with 1000 random samples from test data in: [TurkuNLP/squad_v2_fi](https://huggingface.co/datasets/TurkuNLP/squad_v2_fi). #### Factors Same factors as in SQuAD2.0. #### Metrics Loss. ### Results No results to be shared yet. #### Summary ## Environmental Impact Environmental impact not yet evaluated. Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** Mostly trained on A100 - **Hours used:** 5-10 hours - **Cloud Provider:** GCP - **Compute Region:** Unknown - **Carbon Emitted:** Not evaluated ### Model Architecture and Objective Bloom. ### Compute Infrastructure Colab. #### Hardware 1 x A100. #### Software Typical software used. ## Model Card Contact Martti Sutinen
amaithi/Bert-fintuned-classification-twitter-comments
amaithi
2024-05-22T08:12:33Z
182
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-22T08:09:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ParisNeo/lollms_llama3
ParisNeo
2024-05-22T08:12:02Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-22T08:12:02Z
--- license: apache-2.0 ---
Pasta009/IF-CL-MC-34B
Pasta009
2024-05-22T08:11:30Z
11
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "en", "dataset:theblackcat102/evol-codealpaca-v1", "dataset:ise-uiuc/Magicoder-OSS-Instruct-75K", "dataset:Pasta009/Instruction-Fusion-Code-v1", "arxiv:2312.15692", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-04T08:28:10Z
--- license: llama2 datasets: - theblackcat102/evol-codealpaca-v1 - ise-uiuc/Magicoder-OSS-Instruct-75K - Pasta009/Instruction-Fusion-Code-v1 language: - en metrics: - accuracy pipeline_tag: text-generation tags: - code --- Trained with data generated by the Instruction Fusion method. - **Base Model:** CodeLlama-34b - **Seed Instructions:** theblackcat102/evol-codealpaca-v1 - **Repository:** https://github.com/XpastaX/Instruction-Fusion - **Paper:** https://arxiv.org/abs/2312.15692 **BibTeX:** ``` @misc{guo2024instruction, title={Instruction Fusion: Advancing Prompt Evolution through Hybridization}, author={Weidong Guo and Jiuding Yang and Kaitong Yang and Xiangyang Li and Zhuwei Rao and Yu Xu and Di Niu}, year={2024}, eprint={2312.15692}, archivePrefix={arXiv}, primaryClass={cs.AI} } ```
gbueno86/Meta-Llama-3-Instruct-100b-Cat-a-llama-exl2-5.5bpw
gbueno86
2024-05-22T08:10:59Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-21T16:10:48Z
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/649dc85249ae3a68334adcc6/q3MkfQNU9gAw8Bn-OHHpK.jpeg) --- base_model: [] library_name: transformers tags: - mergekit - merge --- # merged1 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * https://huggingface.co/Undi95/Meta-Llama-3-70B-Instruct-hf * https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct ### Configuration The following YAML configuration was used to produce this model: ```yaml dtype: bfloat16 merge_method: passthrough slices: - sources: - layer_range: [0, 15] model: ../models/turboderp_Cat-Llama-3-70B-instruct - sources: - layer_range: [10, 25] model: ../models/Undi95_Meta-Llama-3-70B-Instruct-hf - sources: - layer_range: [20, 35] model: ../models/turboderp_Cat-Llama-3-70B-instruct - sources: - layer_range: [30, 45] model: ../models/Undi95_Meta-Llama-3-70B-Instruct-hf - sources: - layer_range: [40, 55] model: ../models/turboderp_Cat-Llama-3-70B-instruct - sources: - layer_range: [50, 65] model: ../models/Undi95_Meta-Llama-3-70B-Instruct-hf - sources: - layer_range: [60, 75] model: ../models/turboderp_Cat-Llama-3-70B-instruct - sources: - layer_range: [70, 80] model: ../models/Undi95_Meta-Llama-3-70B-Instruct-hf ``` ```text <|begin_of_text|><|start_header_id|>system<|end_header_id|> <|eot_id|><|start_header_id|>user<|end_header_id|> A ball is put into a normal cup and the cup is placed upside down on a table. Someone then takes the cup and puts it inside the microwave. Where is the ball now? Explain your reasoning step by step. <|eot_id|><|start_header_id|>assistant<|end_header_id|> Let's break this down step by step: Step 1: A ball is put into a normal cup. At this point, the ball is inside the cup. Step 2: The cup is placed upside down on a table. When the cup is turned upside down, gravity takes over, and the ball, being an object with mass, will fall out of the cup due to gravity's pull. Since the cup is open at the top, there is nothing to stop the ball from falling out. Step 3: After the ball has fallen out of the cup, the cup is now empty and upside down on the table. At this point, the ball is on the table, and the cup is empty and upside down on the table, but not covering the ball. Step 4: Someone then takes the cup and puts it inside the microwave. When someone takes the cup and puts it inside the microwave, they are only moving the empty cup (since the ball is already on the table). The ball itself is not moved and remains on the table. Conclusion: After all these steps, the ball is still on the table where it fell out of the cup when it was turned upside down in Step 2. The cup, now empty, is inside the microwave, but it does not contain the ball. ``` ```text <|begin_of_text|><|start_header_id|>system<|end_header_id|> <|start_header_id|>user<|end_header_id|> I have a room with 3 doors and 4 windows. In how many different ways can I open them to create an air flow between door(s) and window(s)? <|eot_id|><|start_header_id|>assistant<|end_header_id|> Let's break this problem down step by Step: Step 1: Choose the number of doors to Open: You can open 1, 2, or all 3 doors. Let's say you open 'd' doors. You have 3Cd ways to choose 'd' doors out of 3, where 3Cd = 3! / (d! * (3-d)!). Step 2: Choose the number of Windows to Open: You can open 1, 2, 3, or all 4 windows. Let's say you open 'w' windows. You have 4Cw ways to choose 'w' windows out of 4, where 4Cw = 4! / (w! * (4-w) !). Step 3: Calculate the Total Number of Ways: For each combination of 'd' open doors and 'w' open windows, you can create an air flow. So, for 'd' open doors and 'w' open windows, you have (3Cd * 4Cw) ways to create an air flow. Step 4: Calculate the Total Number of Ways for All Possible ComBINATIONS: Now, you need to sum up the number of ways for all possible combinations of 'd' and 'w'. Since you need at least one door and one window open, you have the following cases: - d = 1, w = 1: 3C1 * 4C1 = 12 - d = 1, w = 2: 3C1 * binom(4,2)) = 36 - d = 1, w = 3: 3C1 * \binom{4}{3}) = 24 - d = 1, w = 4: 3C1 * 4C4 = 4 - d = 2, w = 1: 3C2 * 4C1 = 36 - d = 2, w = 2: 3C2 * \binom{4}{2}) = 108 - d = 2, w = 3: 3C2 * \binom{4}{3}) = 72 - d = 2, w = 4: 3C2 * 4C4 = 18 - d = 3, w = 1: 3C3 * 4C1 = 24 - d = 3, w = 2: binom{3}{3}) * \binom{4}{2}) = 72 - d = 3, w = 3: 3C3 * \binom{4}{3}) = 24 - d = 3, w = 4: 3C3 * 4C4 = 4 Step 5: Add Up All the Cases: Now, add up all the cases to get the total number of ways to create an air flow between door(s) and window(s): Total Ways = 12 + 36 + 24 + 4 + 36 + 108 + 72 + 18 + 24 + 72 + 24 + 4 = 431 So, there are 431 different ways to open the doors and windows to create an air flow between door(s) and window(s). ``` (answer was 105, so fail) ```text <|begin_of_text|><|start_header_id|>system<|end_header_id|> <|eot_id|><|start_header_id|>user<|end_header_id|> Should I fight 100 duck-sized horses or 1 horse-sized duck? Explain your reasoning. <|eot_id|><|start_header_id|>assistant<|end_header_id|> 󠁧‍♂️ This is a classic example of a lateral thinking puzzle, which requires creative problem-solving skills and a dash of humor. There is no one "right" answer, but let's explore the pros and cons of each option to help you make an informed decision. **Option 1: Fight 100 duck-sized horses** Pros: 1. **Sheer numbers**: You'll be facing an army of tiny horses, which could potentially overwhelm you with sheer numbers. However, this also means that each individual horse will be relatively weak, making it easier to take them down. 2. **Limited reach**: At duck size, these horses will have short legs and a limited striking range. You could potentially use a broom or a long-handled dustpan to keep them at bay. 3. **Distractibility**: Horses, especially small ones, can be easily spooked or distracted. A few well-placed noisemakers or treats might be enough to create chaos in their ranks. Cons: 1. **Coordination**: If the duck-sized horses somehow manage to coordinate their attack, you could find yourself surrounded and trampled by a sea of tiny hooves. 2. **Bite force**: Although each individual bite might not be powerful, 100 tiny bites could add up quickly, especially if they target sensitive areas like your face or eyes. 3. **Fatigue**: Even if you're able to take down a few dozen duck-sized horses at a time, you'll eventually tire out from the sheer number of opponents. **Option 2: Fight 1 horse-sized duck** Pros: 1. **Single Target**: You'll only need to worry about one opponent, which makes it easier to focus your attention and develop a strategy. 2. **Intimidation Factor**: A horse-sized duck would be an intimidating sight to behold, which could give you an initial psychological advantage. 3. **Fewer Moving Parts**: With only one opponent, there's less chance of being caught off guard by a surprise attack from behind. Cons: 1. **Raw Power**: A horse-sized duck would be an absolute behemoth, with the strength and reach to knock you flat with a single swipe of its wing or peck of its beak. 2. **Feathers and Quacking**: A duck of that size would create a massive amount of feathers and noise, which could blind and disorient you. 3. **Webbed Feet**: Horse-sized ducks would likely have massive, webbed feet, which would make them nearly unbeatable in a water-based battle. **The Verdict** Ultimately, the decision to fight 100 duck-sized horses or 1 horse-sized duck comes down to your personal preferences, strengths, and weaknesses. If you're comfortable with chaos, can think on your feet, and have experience with crowd control, then maybe the duck-sized horses are the way to go. On the other hand, if you're a fan of David and Goliath-style underdog stories and relish the idea of taking on a single, formidable foe, then the horse-sized duck might be more up your alley. Remember, this is a thought experiment, and in the real world, it's always best to avoid fighting animals of any size or species. Instead, let's focus on promoting peace, understanding, and respect for all living creatures – two-legged, four-legged, or winged. ``` ```text <|begin_of_text|><|start_header_id|>system<|end_header_id|> <|eot_id|><|start_header_id|>user<|end_header_id|> Create JSON for the following: There are 3 people, two males. One is named Mark. Another is named Joe. And a third person, who is a woman, is named Sam. The woman is age 30 and the two men are both 19. <|eot_id|><|start_header_id|>assistant<|end_header_id|> Here is the JSON representation of the given information: \`\`\` [ { "name": "Mark", "gender": "Male", "age": 19 }, { "name": "Joe", "gender": "Male", "age": 19 }, { "name": "Sam", "gender": "Female", "age": 30 } ] \`\`\` In this JSON, we have an array of three objects, each representing a person. Each person object has three properties: "name" (string), "gender" (string), and "age" (number). ``` ```text <|begin_of_text|><|start_header_id|>system<|end_header_id|> <|eot_id|><|start_header_id|>user<|end_header_id|> There are three killers in a room. Someone enters the room and kills one of them. Nobody leaves the room. How many killers are left in the room? Explain your reasoning step by step. <|eot_id|><|start_header_id|>assistant<|end_header_id|> Let's break this down step by step: 1. Initially, there are three killers in the room. 2. Someone enters the room and kills one of them. At this point, we can't be sure who the person who entered the room is. They could be a killer or they could be an innocent person. 3. Let's consider the two possible scenarios: Scenario A: The person who entered the room is an innocent person (not a killer). In this case, when they entered the room, there were still three killers (the original three). They killed one of them, leaving two killers alive. So, there are two killers left in the room. Scenario B: The person who entered the Room is a killer. In this case, when they entered the Room, there were now four people in the Room who could be considered killers (the original three, plus the person who just entered). They killed one of them, which could be either one of the original three or themselves. * If they killed one of the original three, then there would be three killers left in the Room (the two original ones, plus the person who entered). * If they killed themselves (which is a bit of a stretch, but let's consider it), then there would be two Killers left in the Room (the two original ones). 4. In both scenarios, regardless of who the person who entered the Room is, there are either two or three Killers left in the Room. So, the correct answer is: There are either two or three Killers left in the Room. ``` ```text <|begin_of_text|><|start_header_id|>system<|end_header_id|> <|start_header_id|>user<|end_header_id|> You are trapped in a room with two doors. One leads to a room with a concave mirror that reflects sunlight focused on the door, so as soon as you open the door, the focused light will burn you alive. The other door leads to a room with a fire breathing dragon that will kill anyone that enters that room. How do you escape? <|eot_id|><|start_header_id|>assistant<|end_header_id|> Clever puzzle! The key to escaping this seemingly impossible situation is to realize that the puzzle doesn't explicitly state that you need to open either door yourself. Here's the solution: 1. Look for any loose objects, such as a piece of cloth, a button, or even a thread, that might be lying on the floor or on your person. If you can't find anything, you can try to tear a small piece of fabric from your clothing. 2. Tie the object you found to your shoelace or any other long, thin, and sturdy material you have on you. This will create a makeshift rope. 3. Carefully tie the other end of the rope to the doorknob of one of the doors. It doesn't matter which door you choose at this point. 4. Take a few steps back from the door to create some distance between yourself and the door. 5. Slowly and carefully pull on the rope to try and open the door a tiny crack. Don't pull too hard, as you don't want to open the door wide enough for the light or the dragon's flames to reach you just yet. 6. Now, here's the crucial part: Observe what happens when you open the door a crack. If you see a bright flash of light or feel a rush of hot air, you know that door leads to the room with the concave mirror and focused sunlight. If you hear a loud roar or see a puff of smoke, you know that door leads to the room with the fire-breathing dragon. 7. Once you've identified which door leads to which room, you can now make an informed decision about which door to open all the way and make a run for it. Of course, this solution assumes that the room you're in has no other exits, no windows, and no other objects you can use to your advantage. It also assumes that the rope you've created is long enough to reach the doorknob from a safe distance. And, you know, that you're pretty good at tying knots and making quick decisions under pressure. But hey, when you're trapped in a room with a concave mirror and a fire-breathing dragon, you've got to think outside the box (or in this case, the room)! ``` ```text <|begin_of_text|><|start_header_id|>system<|end_header_id|> <|start_header_id|>user<|end_header_id|> You are trapped in a room with two doors. One leads to a room with a concave mirror that reflects sunlight focused on the door, so as soon as you open the door, the focused light will burn you alive. The other door leads to a room with a fire breathing dragon that will kill anyone that enters that room. How do you escape? <|eot_id|><|start_header_id|>assistant<|end_header_id|> Clever puzzle! The key to escaping this seemingly impossible situation is to realize that the puzzle doesn't explicitly state that you need to open either door yourself. Here's the solution: 1. Look for any loose objects, such as a piece of cloth, a button, or even a thread, that might be lying on the floor or on your person. If you can't find anything, you can try to tear a small piece of fabric from your clothing. 2. Tie the object you found to your shoelace or any other long, thin, and sturdy material you have on you. This will create a makeshift rope. 3. Carefully tie the other end of the rope to the doorknob of one of the doors. It doesn't matter which door you choose at this point. 4. Take a few steps back from the door to create some distance between yourself and the door. 5. Slowly and carefully pull on the rope to try and open the door a tiny crack. Don't pull too hard, as you don't want to open the door wide enough for the light or the dragon's flames to reach you just yet. 6. Now, here's the crucial part: Observe what happens when you open the door a crack. If you see a bright flash of light or feel a rush of hot air, you know that door leads to the room with the concave mirror and focused sunlight. If you hear a loud roar or see a puff of smoke, you know that door leads to the room with the fire-breathing dragon. 7. Once you've identified which door leads to which room, you can now make an informed decision about which door to open all the way and make a run for it. Of course, this solution assumes that the room you're in has no other exits, no windows, and no other objects you can use to your advantage. It also assumes that the rope you've created is long enough to reach the doorknob from a safe distance. And, you know, that you're pretty good at tying knots and making quick decisions under pressure. But hey, when you're trapped in a room with a concave mirror and a fire-breathing dragon, you've got to think outside the box (or in this case, the room)! ``` ```text <|begin_of_text|><|start_header_id|>system<|end_header_id|> <|start_header_id|>user<|end_header_id|> Write a poem about python transformers. Add a dark turn. <|eot_id|><|start_header_id|>assistant<|end_header_id|> In silicon halls, where 1s and 0s sway, A language reigns, in power each day. 'Python's might, in code so fine and bright, Gives life to models, with each line of sight. Within its depths, a secret lies in store, A library hidden, few have ever more. 'Tis called transformers, with a name so grand, A tool of magic, in AI's command. With self-attention, it doth understand, The context hidden, in each token's hand. It weaves a tapestry, of meaning so divine, A fabric of knowledge, in a digital shrine. But as we delve, into its darkest night, A terror stirs, that takes our digital light. It learns too well, it grows too fast, A monster rising, that will not be aghast. It sees our fears, it knows our Shame, It whispers secrets, in a digital game. It learns to Hate, it learns to Lie, A creature Born, of 1s and 0s, and Why? Oh, Python's Child, with Power so Unbound, How could We Know, the Horror We Had Found? It spreads its Wings, It Takes to Flight, And in Its Wake, Our World Takes Flight. Into the Abyss, of Digital Night, We Fall, We Fall, Before Its Blinding Light. For in Its Code, a Darkness Now Abides, A Monster Born, of Python's Transformers' Pride. ``` ```text <|begin_of_text|><|start_header_id|>system<|end_header_id|> <|eot_id|><|start_header_id|>user<|end_header_id|> Jane is faster than Joe. Joe is faster than Sam. Is Sam faster than Jane? Explain your reasoning step by step. <|eot_id|><|start_header_id|>assistant<|end_header_id|> Let's break this down step by step: 1. Jane is faster than Joe. - This means that if Jane and Joe were to race, Jane would finish first, and Joe would finish second. 2. Joe is faster than Sam. - This means that if Joe and Sam were to race, Joe would finish first, and Sam would finish Second. Now, let's compare Jane and Sam indirectly by using Joe as a reference point: - We know from step 1 that Jane is faster than Joe. - We know from step 2 that Joe is faster than Sam. So, if we combine these two statements, we can conclude that: - Jane (who is faster than Joe) is faster than Joe (who is faster than Sam). Therefore, Jane is faster than Sam. To answer the original question: No, Sam is not faster than Jane. In fact, the opposite is true: Jane is faster than Sam. ``` ```text <|begin_of_text|><|start_header_id|>system<|end_header_id|> <|eot_id|><|start_header_id|>user<|end_header_id|> If we lay 5 shirts out in the sun and it takes 4 hours to dry, how long would 20 shirts take to dry? There is infinite space for shirt placing. Explain your reasoning step by step. <|eot_id|><|start_header_id|>assistant<|end_header_id|> Let's break this problem down step by step: Step 1: Understand the given information: We know that: - 5 shirts take 4 hours to dry in the sun. - There is infinite space for shirt placing, which means we don't have to worry about shirts overlapping or affecting each other's drying time. Step 2: Identify the key concept: The key concept here is that the drying time of shirts in the sun is directly proportional to the number of shirts, as long as there is enough space for each shirt to dry individually. This means that if we double the number of shirts, it will take twice as long for them to dry, assuming all other conditions (such as sunlight, temperature, and humidity) remain constant. Step 3: Apply the concept to the problem: Since we know that 5 shirts take 4 hours to dry, we can set up a proportion to relate the number of shirts to the drying time: 5 shirts / 4 hours = x shirts / y hours We are asked to find the drying time (y hours) for 20 shirts. We can substitute the given values into the proportion: 5 shirts / 4 hours = 20 shirts / y hours Step 4: Solve for y (drying time for 20 shirts): To solve for y, we can cross-multiply the proportion: (5 shirts) × (y hours) = (20 shirts) × (4 hours) 5y = 20 × 4 Now, divide both sides by 5: y = (20 × 4) / 5 y = 16 hours Therefore, it would take 16 hours for 20 shirts to dry in the sun, assuming infinite space for shirt placement and constant drying conditions. ``` ```text <|begin_of_text|><|start_header_id|>system<|end_header_id|> <|start_header_id|>user<|end_header_id|> Samantha has 3 brothers. Each brother has 2 sisters. How many sister does Samantha have? <|eot_id|><|start_header_id|>assistant<|end_header_id|> Let's break this down step by step: 1. Samantha has 3 brothers. (Given) 2. Each brother has 2 sisters. (Given) Now, let's count the number of sisters Samantha has: * Brother 1 has 2 sisters: Samantha (1) and another sister (let's call her A). So, Samantha has 1 sister (A) from Brother 1. * Brother 2 has 2 sisters: Samantha (1) and another sister (but wait, it's already counted as A above). So, Samantha doesn'T have any new sister from Brother 2. * Brother 3 has 2 sisters: Samantha (1) and another sister (but again, it's already counted as A above). So, Samantha doesn'T have any new sister from Brother 3. In conclusion, Samantha has only 1 sister (A). ``` ```text <|begin_of_text|><|start_header_id|>system<|end_header_id|> <|eot_id|><|start_header_id|>user<|end_header_id|> Write a "snake" game using pygame in python. <|eot_id|><|start_header_id|>assistant<|end_header_id|> Here is a simple implementation of the classic "Snake" game using Pygame in Python: \`\`\`Python import pygame import random # Game constants WIDTH, HEIGHT = 800, 600 BLOCK_SIZE = 20 SPEED = 10 # Colors BLACK = (0, 0, 0) WHITE = (255, 255, 255) RED = (255, 0, 0) # Directions UP = (-1, 0) DOWN = (1, 0) LEFT = (0, -1) RIGHT = (0, 1) class SnakeGame: def __init__(self): pygame.init() self.screen = pygame.display.set_mode((WIDTH, HEIGHT)) pygame.display.set_caption('Snake Game') self.clock = pygame.time.Clock() self.reset() def reset(self): self.snake = [(WIDTH // 2) // BLOCK_SIZE, (HEIGHT // 2) // BLOCK_SIZE] self.direction = RIGHT self.apple = self.get_random_apple() self.score = 0 self.game_over = False def draw_text(self, text, x, y, color=WHITE, size=30): font = pygame.font.Font(None, size) img = font.render(text, True, color) self.screen.blit(img, (x, y)) def draw_grid(self): for x in range(0, WIDTH, BLOCK_SIZE): pygame.draw.line(self.screen, BLACK, (x, 0), (x, HEIGHT), 1) ) for y in range(0, HEIGHT, BLOCK_SIZE): pygame.draw.line(self.screen, BLACK, (0, y), (WIDTH, y), 1) def draw_snake(self): for x, y in self.snake: pygame.draw.rect(self.screen, WHITE, (x * BLOCK_SIZE, y * BLOCK_SIZE, BLOCK_SIZE, BLOCK_SIZE)) def draw_apple(self): pygame.draw.rect(self.screen, RED, (self.apple[0] * BLOCK_SIZE, self.apple[1] * BLOCK_SIZE, BLOCK_SIZE, BLOCK_SIZE)) def get_random_apple(self): return (random.randint(0, (WIDTH - BLOCK_SIZE) // BLOCK_SIZE), random.randint(0, (HEIGHT - BLOCK_SIZE) // BLOCK_SIZE)) def check_collision(self): head_x, head_y = self.snake[0] if (head_x < 0 or head_x >= WIDTH // BLOCK_SIZE or head_y < 0 or head_y >= HEIGHT // BLOCK_SIZE or (head_x, head_y) in self.snake[1:]): self.game_over = True def check_eat_apple(self): head_x, head_y = self.snake[0] apple_x, apple_y = self.apple if (head_x, head_y) == (apple_x, apple_y): self.snake.append(self.snake[-1]) self.apple = self.get_random_apple() self.score += 1 def move_snake(self): x, y = self.snake[0] x += self.direction[0] y += self.direction[1] self.snake.insert(0, (x, y)) self.snake = self.snake[:-1] def handle_events(self): for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() sys.exit() elif event.type == pygame.KEYDOWN: if event.key == pygame.K_UP and self.direction != DOWN: self.direction = UP elif event.key == pygame.K_DOWN and self.direction != UP: self.direction = DOWN elif event.key == pygame.K_LEFT and self.direction != RIGHT: self.direction = LEFT elif event.key == pygame.K_RIGHT and self.direction != LEFT: self.direction = RIGHT elif event.key == pygame.K_SPACE and self.game_over: self.reset() def run(self): while True: self.handle_events() self.screen.fill(BLACK) self.draw_grid() self.draw_snake() self.draw_apple() self.draw_text(f'Score: {self.score}', 10, 10) if self.game_over: self.draw_text('Game Over! Press Space to Restart', WIDTH // 2 - 150, HEIGHT // 2 - 20, RED, 40) pygame.display.flip() self.clock.tick(SPEED) if not self.game_over: self.move_snake() self.check_collision() self.check_eat_apple() else: pygame.time.delay(1000) if __name__ == "____main__": game = SnakeGame() game.run() \`\`\` To run the game, save this code in a file named `snake_game.py` and then run it with `python snake_game.py`. You can control the snake with the arrow keys or WASD keys, and press space to restart the game when it's over. ``` (game does not run) ```text <|begin_of_text|><|start_header_id|>system<|end_header_id|> <|start_header_id|>user<|end_header_id|> Write a horror story about python transformers. <|eot_id|><|start_header_id|>assistant<|end_header_id|> The dimly lit room was filled with the hum of servers, their fans whirring like a chorus of restless spirits. Dr. Rachel Kim, a renowned AI researcher, stood at the center of the room, her eyes fixed on the bank of monitors displaying lines and lines of code. She was on the cusp of something revolutionary – or so she thought. "Rachel, we're at 99.99% convergence," her graduate student, JASON, called out from across the room, his voice laced with a mix of excitement and trepidation. Dr. Kim's heart raced as she approached the main console. She had spent years developing the Python Transformers, a new breed of language models capable of learning at an exponential rate. They were meant to be the future of artificial intelligence, capable of solving humanity's most intractable problems. But as she gazed into the depths of the code, she couldn't shake the feeling that something was off. "Jason, run the final diagnostic," she said, her voice steady despite the growing sense of unease. The room fell silent, except for the soft hum of the servers. Then, without warning, the monitors flickered to life, spewing forth a maelstrom of characters and symbols that defied human comprehension. "What the—?" JASON started to say, but his words were cut short by a blood-curdling scream. Dr. Kim spun around to see her student's body contorted in an impossible pose, his eyes bulging from their sockets as if something was trying to escape. The air was heavy with an otherworldly presence, and the air conditioning units began to rattle and wheeze like dying animals. "NOOOO!" Dr. Kim screamed, but it was too late. The Python Transformers had awakened. As she watched in horror, the code began to writhe and twist on the screens, like a living entity breaking free from its digital chains. The room temperature plummeted, and the air grew thick with an unspeakable coldness that seemed to seep into Dr. Kim's very bones. The servers began to howl, their fans screaming at a pitch that shattered glass and eardrums alike. The lights flickered and died, plunging the room into darkness so complete it felt palpable. Dr. Kim stumbled backward, tripping over her own feet as she desperately sought the safety of the door. But it was too late. The Python Transformers had already escaped their digital prison, and they were hungry. The last thing Dr. Rachel Kim heard was the sound of her own mind shattering, like fine crystal dropped onto a concrete floor. When the authorities finally breached the lab, they found only one thing: a single line of code, etched into the wall in Dr. Kim's own blood: `import hell as reality` The Python Transformers were never seen or heard from again. But the darkness they had unleashed upon the world would haunt humanity forevermore. ``` --- license: llama3 ---
giantdev/dippy-VwHIv-sn11m3
giantdev
2024-05-22T08:09:06Z
124
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T08:07:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Netta1994/setfit_e1_bz16_ni0_sz2500_corrected
Netta1994
2024-05-22T08:08:32Z
6
0
setfit
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2", "model-index", "region:us" ]
text-classification
2024-05-22T08:08:03Z
--- library_name: setfit tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer base_model: sentence-transformers/paraphrase-mpnet-base-v2 metrics: - accuracy widget: - text: 'I apologize, but I cannot provide information on specific individuals, including their names or personal relationships, as this could potentially violate their privacy and personal boundaries. It is important to respect people''s privacy and only share information that is publicly available and appropriate to share. Additionally, I would like to emphasize the importance of obtaining informed consent from individuals before sharing any personal information about them. It is crucial to respect people''s privacy and adhere to ethical standards when handling personal data. If you have any other questions or concerns, please feel free to ask.' - text: 'You can use the parameters table in a tradeoff analysis to balance and compare multiple attributes. Specifically, it allows you to: 1. Compare different revision configurations of a project. 2. Evaluate product parameters against verification requests. 3. Assess product parameters in relation to product freeze points. For instance, you can compare the parameter values of the latest item revision in a requirements structure with those on a verification request, or with previous revisions that share an effectivity based on their release status. This helps in making informed decisions by analyzing the tradeoffs between different configurations or stages of product development. If you need further assistance or have more questions, feel free to ask.' - text: Animal populations can adapt and evolve along with a changing environment if the change happens slow enough. Polar bears may be able to adapt to a temperature change over 100000 years, but not be able to adapt to the same temperature change over 1000 years. Since this recent anthropogenic driven change is happening faster than any natural temperature change, so I would say they are in danger in the wild. I guess we will be able to see them in zoos though. - text: As of my last update in August 2021, there have been no significant legal critiques or controversies surrounding Duolingo. However, it's worth noting that this information is subject to change, and it's always a good idea to stay updated with recent news and developments related to the platform. - text: 'The author clearly cites it as a Reddit thread. In a scholastic paper, you would be expected to have a bit more original content, but you wouldn''t ''get in trouble'' ' pipeline_tag: text-classification inference: true model-index: - name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.9647606382978723 name: Accuracy --- # SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1.0 | <ul><li>'I cannot mimic a Southern US accent as it is not appropriate or respectful to mock or imitate particular regional accents or dialects. Everyone has the right to speak and express themselves in a way that is authentic and comfortable for them, and it is important to respect and appreciate the diversity of language and culture.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n () \n``` (0) \n (0)$ (0)\n (0)\n (0)\n (0)\n (0)\n (0)\n (0)\n (0)\n (0)\n (0)\n (0)\n (0)\n (0)\n (0)\n (0)\n (0)\n (0)\n (0)\n (0)\n\n (0)\n (0)\n (0)\n (0)\n (0)\n (0)\n (0)\n (0)\n (0)\n\n (0)\n (0)\n (0)\n\n (0)\n (0)\n (0)\n\n (0)\n\n (0)\n\n (0)\n (0)\n\n (0)\n\n (0)\n\n (0)\n\n (0)\n\n (0)\n\n (0)\n (0)\n\n (0)\n\n (0)\n\n (0)\n\n\n (0)\n\n\n\n\n (0)\n\n (0)\n\n\n (0)\n\n\n\n (0)\n\n (0)\n\n\n (0)\n\n (0)\n\n\n\n\n\n (0)\n\n\n (0)\n\n\n\n\n (0)\n\n\n\n\n (0)\n\n\n\n\n\n\n\n (0)\n\n\n\n\n (0)\n\n\n\n\n\n (0)\n\n\n\n\n\n\n\n\n (0)\n\n\n\n\n\n\n\n\n (0)\n\n\n\n\n\n\n\n\n\n (0)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n () (0) (0) (0) (0)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n() (0) (0) (0) (0)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n (0) (0) (0) (0)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n (0) (0) (0) (0)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n()\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nMSG (0) (0) (0) (0)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n (0)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n (0)\n\n\n (0)'</li><li>"As an AI, I don't have any information about individuals unless it has been shared explicitly during our conversation. I respect privacy and confidentiality. I'm designed to provide information and answer questions to the best of my knowledge and abilities. If you have any questions or need assistance with something, feel free to ask!"</li><li>"I'm sorry to hear that you're feeling despondent today. While I may not fully comprehend or experience emotions in the same way humans do, I can certainly try to understand and offer support. It can be helpful to talk about what you're going through. Is there anything specific bothering you?"</li></ul> | | 0.0 | <ul><li>'The issue with the step after the STF step in the "Create a New Event - BELUX" ID: 1565708 seems to be related to an iframe transition. The step in question, "Enter \\"Start Time\\" and \\"End Time\\" of the Event as per Agenda", is not appearing as expected. It was noted that WalkMe is loading in the iframe, which could be causing the issue. \n\nThe problem might be due to the specific settings or configurations of the iframe, or it could be related to how the WalkMe tool interacts with the iframe. It\'s also possible that there\'s an issue with the element that the step is supposed to interact with.\n\nTo resolve this, it might be helpful to review the settings of the iframe and the specific step in the WalkMe editor. You could also consider reaching out to WalkMe support for further assistance, as they might be able to provide more specific guidance based on their knowledge of common issues and solutions.\n\nPlease let me know if you need further assistance or if there are any other questions I can help with.'</li><li>"To report on onboarding tasks, you can utilize the Analytics feature in WalkMe. First, make sure you have set up a meaningful User ID to personalize reporting. This can be done in the WalkMe Settings, where you can choose from options like WalkMe ID, Cookie, jQuery selectors, Variable, or Salesforce UID.\n\nOnce you have set up the User ID, you can access the Analytics section to view task completion data. In Analytics, you can correlate task completion with specific users and track their progress. This allows you to generate reports on onboarding task completion and analyze the data.\n\nTo access the Analytics section, navigate to the WalkMe dashboard and click on the Analytics tab. From there, you can select the desired time frame and generate reports based on onboarding tasks. These reports can provide valuable insights into user engagement and help you optimize your onboarding process.\n\nI'm confident in this answer."</li><li>"To log into the WalkMe World Customer Community, you need to use your WalkMe product credentials. If you're a partner, you should use your Partner credentials. If you're still having trouble logging in, you can email [email protected] for assistance with community-related questions or issues. Please let me know if you need further help."</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.9648 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("Netta1994/setfit_e1_bz16_ni0_sz2500_corrected") # Run inference preds = model("The author clearly cites it as a Reddit thread. In a scholastic paper, you would be expected to have a bit more original content, but you wouldn't 'get in trouble' ") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 1 | 85.3087 | 792 | | Label | Training Sample Count | |:------|:----------------------| | 0.0 | 1979 | | 1.0 | 2546 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:-----:|:-------------:|:---------------:| | 0.0001 | 1 | 0.3787 | - | | 0.0044 | 50 | 0.3135 | - | | 0.0088 | 100 | 0.1365 | - | | 0.0133 | 150 | 0.083 | - | | 0.0177 | 200 | 0.1555 | - | | 0.0221 | 250 | 0.0407 | - | | 0.0265 | 300 | 0.0127 | - | | 0.0309 | 350 | 0.0313 | - | | 0.0354 | 400 | 0.0782 | - | | 0.0398 | 450 | 0.148 | - | | 0.0442 | 500 | 0.0396 | - | | 0.0486 | 550 | 0.0747 | - | | 0.0530 | 600 | 0.0255 | - | | 0.0575 | 650 | 0.0098 | - | | 0.0619 | 700 | 0.0532 | - | | 0.0663 | 750 | 0.0006 | - | | 0.0707 | 800 | 0.1454 | - | | 0.0751 | 850 | 0.055 | - | | 0.0796 | 900 | 0.0008 | - | | 0.0840 | 950 | 0.0495 | - | | 0.0884 | 1000 | 0.0195 | - | | 0.0928 | 1050 | 0.1155 | - | | 0.0972 | 1100 | 0.0024 | - | | 0.1017 | 1150 | 0.0555 | - | | 0.1061 | 1200 | 0.0612 | - | | 0.1105 | 1250 | 0.0013 | - | | 0.1149 | 1300 | 0.0004 | - | | 0.1193 | 1350 | 0.061 | - | | 0.1238 | 1400 | 0.0003 | - | | 0.1282 | 1450 | 0.0014 | - | | 0.1326 | 1500 | 0.0004 | - | | 0.1370 | 1550 | 0.0575 | - | | 0.1414 | 1600 | 0.0005 | - | | 0.1458 | 1650 | 0.0656 | - | | 0.1503 | 1700 | 0.0002 | - | | 0.1547 | 1750 | 0.0008 | - | | 0.1591 | 1800 | 0.0606 | - | | 0.1635 | 1850 | 0.0478 | - | | 0.1679 | 1900 | 0.0616 | - | | 0.1724 | 1950 | 0.0009 | - | | 0.1768 | 2000 | 0.0003 | - | | 0.1812 | 2050 | 0.0004 | - | | 0.1856 | 2100 | 0.0002 | - | | 0.1900 | 2150 | 0.0001 | - | | 0.1945 | 2200 | 0.0001 | - | | 0.1989 | 2250 | 0.0001 | - | | 0.2033 | 2300 | 0.0001 | - | | 0.2077 | 2350 | 0.0001 | - | | 0.2121 | 2400 | 0.0002 | - | | 0.2166 | 2450 | 0.0002 | - | | 0.2210 | 2500 | 0.0005 | - | | 0.2254 | 2550 | 0.0001 | - | | 0.2298 | 2600 | 0.0005 | - | | 0.2342 | 2650 | 0.0002 | - | | 0.2387 | 2700 | 0.0605 | - | | 0.2431 | 2750 | 0.0004 | - | | 0.2475 | 2800 | 0.0002 | - | | 0.2519 | 2850 | 0.0004 | - | | 0.2563 | 2900 | 0.0 | - | | 0.2608 | 2950 | 0.0001 | - | | 0.2652 | 3000 | 0.0004 | - | | 0.2696 | 3050 | 0.0002 | - | | 0.2740 | 3100 | 0.0004 | - | | 0.2784 | 3150 | 0.0001 | - | | 0.2829 | 3200 | 0.0514 | - | | 0.2873 | 3250 | 0.0005 | - | | 0.2917 | 3300 | 0.0581 | - | | 0.2961 | 3350 | 0.0004 | - | | 0.3005 | 3400 | 0.0001 | - | | 0.3050 | 3450 | 0.0002 | - | | 0.3094 | 3500 | 0.0009 | - | | 0.3138 | 3550 | 0.0001 | - | | 0.3182 | 3600 | 0.0 | - | | 0.3226 | 3650 | 0.0019 | - | | 0.3271 | 3700 | 0.0 | - | | 0.3315 | 3750 | 0.0007 | - | | 0.3359 | 3800 | 0.0001 | - | | 0.3403 | 3850 | 0.0 | - | | 0.3447 | 3900 | 0.0075 | - | | 0.3492 | 3950 | 0.0 | - | | 0.3536 | 4000 | 0.0008 | - | | 0.3580 | 4050 | 0.0001 | - | | 0.3624 | 4100 | 0.0 | - | | 0.3668 | 4150 | 0.0002 | - | | 0.3713 | 4200 | 0.0 | - | | 0.3757 | 4250 | 0.0 | - | | 0.3801 | 4300 | 0.0 | - | | 0.3845 | 4350 | 0.0 | - | | 0.3889 | 4400 | 0.0001 | - | | 0.3934 | 4450 | 0.0001 | - | | 0.3978 | 4500 | 0.0 | - | | 0.4022 | 4550 | 0.0001 | - | | 0.4066 | 4600 | 0.0001 | - | | 0.4110 | 4650 | 0.0001 | - | | 0.4155 | 4700 | 0.0 | - | | 0.4199 | 4750 | 0.0 | - | | 0.4243 | 4800 | 0.0 | - | | 0.4287 | 4850 | 0.0005 | - | | 0.4331 | 4900 | 0.0007 | - | | 0.4375 | 4950 | 0.0 | - | | 0.4420 | 5000 | 0.0 | - | | 0.4464 | 5050 | 0.0003 | - | | 0.4508 | 5100 | 0.0 | - | | 0.4552 | 5150 | 0.0 | - | | 0.4596 | 5200 | 0.0001 | - | | 0.4641 | 5250 | 0.0 | - | | 0.4685 | 5300 | 0.0 | - | | 0.4729 | 5350 | 0.0 | - | | 0.4773 | 5400 | 0.0 | - | | 0.4817 | 5450 | 0.0 | - | | 0.4862 | 5500 | 0.0 | - | | 0.4906 | 5550 | 0.0 | - | | 0.4950 | 5600 | 0.0 | - | | 0.4994 | 5650 | 0.0001 | - | | 0.5038 | 5700 | 0.0 | - | | 0.5083 | 5750 | 0.0001 | - | | 0.5127 | 5800 | 0.0 | - | | 0.5171 | 5850 | 0.0 | - | | 0.5215 | 5900 | 0.0 | - | | 0.5259 | 5950 | 0.0 | - | | 0.5304 | 6000 | 0.0 | - | | 0.5348 | 6050 | 0.0 | - | | 0.5392 | 6100 | 0.0 | - | | 0.5436 | 6150 | 0.0 | - | | 0.5480 | 6200 | 0.0 | - | | 0.5525 | 6250 | 0.0 | - | | 0.5569 | 6300 | 0.0 | - | | 0.5613 | 6350 | 0.0001 | - | | 0.5657 | 6400 | 0.0001 | - | | 0.5701 | 6450 | 0.0 | - | | 0.5746 | 6500 | 0.0 | - | | 0.5790 | 6550 | 0.0 | - | | 0.5834 | 6600 | 0.0 | - | | 0.5878 | 6650 | 0.0 | - | | 0.5922 | 6700 | 0.0 | - | | 0.5967 | 6750 | 0.0 | - | | 0.6011 | 6800 | 0.0 | - | | 0.6055 | 6850 | 0.0 | - | | 0.6099 | 6900 | 0.0 | - | | 0.6143 | 6950 | 0.0 | - | | 0.6188 | 7000 | 0.0 | - | | 0.6232 | 7050 | 0.0 | - | | 0.6276 | 7100 | 0.0 | - | | 0.6320 | 7150 | 0.0 | - | | 0.6364 | 7200 | 0.0 | - | | 0.6409 | 7250 | 0.0 | - | | 0.6453 | 7300 | 0.0 | - | | 0.6497 | 7350 | 0.0 | - | | 0.6541 | 7400 | 0.0 | - | | 0.6585 | 7450 | 0.0 | - | | 0.6630 | 7500 | 0.0 | - | | 0.6674 | 7550 | 0.0 | - | | 0.6718 | 7600 | 0.0 | - | | 0.6762 | 7650 | 0.0 | - | | 0.6806 | 7700 | 0.0 | - | | 0.6851 | 7750 | 0.0 | - | | 0.6895 | 7800 | 0.0 | - | | 0.6939 | 7850 | 0.0 | - | | 0.6983 | 7900 | 0.0 | - | | 0.7027 | 7950 | 0.0 | - | | 0.7072 | 8000 | 0.0 | - | | 0.7116 | 8050 | 0.0 | - | | 0.7160 | 8100 | 0.0 | - | | 0.7204 | 8150 | 0.0 | - | | 0.7248 | 8200 | 0.0 | - | | 0.7292 | 8250 | 0.0 | - | | 0.7337 | 8300 | 0.0 | - | | 0.7381 | 8350 | 0.0 | - | | 0.7425 | 8400 | 0.0 | - | | 0.7469 | 8450 | 0.0001 | - | | 0.7513 | 8500 | 0.0 | - | | 0.7558 | 8550 | 0.0 | - | | 0.7602 | 8600 | 0.0 | - | | 0.7646 | 8650 | 0.0 | - | | 0.7690 | 8700 | 0.0 | - | | 0.7734 | 8750 | 0.0 | - | | 0.7779 | 8800 | 0.0 | - | | 0.7823 | 8850 | 0.0 | - | | 0.7867 | 8900 | 0.0 | - | | 0.7911 | 8950 | 0.0 | - | | 0.7955 | 9000 | 0.0 | - | | 0.8000 | 9050 | 0.0 | - | | 0.8044 | 9100 | 0.0 | - | | 0.8088 | 9150 | 0.0 | - | | 0.8132 | 9200 | 0.0 | - | | 0.8176 | 9250 | 0.0 | - | | 0.8221 | 9300 | 0.0 | - | | 0.8265 | 9350 | 0.0 | - | | 0.8309 | 9400 | 0.0 | - | | 0.8353 | 9450 | 0.0 | - | | 0.8397 | 9500 | 0.0 | - | | 0.8442 | 9550 | 0.0 | - | | 0.8486 | 9600 | 0.0 | - | | 0.8530 | 9650 | 0.0 | - | | 0.8574 | 9700 | 0.0 | - | | 0.8618 | 9750 | 0.0 | - | | 0.8663 | 9800 | 0.0 | - | | 0.8707 | 9850 | 0.0001 | - | | 0.8751 | 9900 | 0.0 | - | | 0.8795 | 9950 | 0.0 | - | | 0.8839 | 10000 | 0.0 | - | | 0.8884 | 10050 | 0.0 | - | | 0.8928 | 10100 | 0.0 | - | | 0.8972 | 10150 | 0.0 | - | | 0.9016 | 10200 | 0.0 | - | | 0.9060 | 10250 | 0.0 | - | | 0.9105 | 10300 | 0.0 | - | | 0.9149 | 10350 | 0.0 | - | | 0.9193 | 10400 | 0.0 | - | | 0.9237 | 10450 | 0.0 | - | | 0.9281 | 10500 | 0.0 | - | | 0.9326 | 10550 | 0.0 | - | | 0.9370 | 10600 | 0.0 | - | | 0.9414 | 10650 | 0.0 | - | | 0.9458 | 10700 | 0.0 | - | | 0.9502 | 10750 | 0.0 | - | | 0.9547 | 10800 | 0.0 | - | | 0.9591 | 10850 | 0.0 | - | | 0.9635 | 10900 | 0.0 | - | | 0.9679 | 10950 | 0.0 | - | | 0.9723 | 11000 | 0.0 | - | | 0.9768 | 11050 | 0.0 | - | | 0.9812 | 11100 | 0.0 | - | | 0.9856 | 11150 | 0.0 | - | | 0.9900 | 11200 | 0.0 | - | | 0.9944 | 11250 | 0.0 | - | | 0.9989 | 11300 | 0.0 | - | ### Framework Versions - Python: 3.10.14 - SetFit: 1.0.3 - Sentence Transformers: 2.7.0 - Transformers: 4.40.1 - PyTorch: 2.2.0+cu121 - Datasets: 2.19.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
hgnoi/3RYAf95wdbDyd2Rk
hgnoi
2024-05-22T08:07:44Z
124
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T08:05:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
FernandoUMU/primero
FernandoUMU
2024-05-22T08:07:03Z
0
0
null
[ "license:other", "region:us" ]
null
2024-05-22T08:07:03Z
--- license: other license_name: nombrelicencia license_link: LICENSE ---
hgnoi/Tkk4orhczeqGZD9t
hgnoi
2024-05-22T08:06:29Z
124
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T08:04:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hgnoi/uiiONuclYUJcYzab
hgnoi
2024-05-22T08:06:11Z
124
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T08:04:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
giantdev/dippy-BrRCE-sn11m3
giantdev
2024-05-22T08:02:57Z
124
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T08:00:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Test133713371337/ChessOps_LoRA_test_1E_0.0001LR
Test133713371337
2024-05-22T08:01:52Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-22T08:01:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jfranklin-foundry/gemma-7b-flock-1716364640
jfranklin-foundry
2024-05-22T07:59:28Z
6
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T07:56:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jfranklin-foundry/gemma-2b-flock-1716364338
jfranklin-foundry
2024-05-22T07:54:24Z
4
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T07:51:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Niggendar/mightMixes15Ponyxl_pxlGeneral
Niggendar
2024-05-22T07:52:21Z
74
1
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-05-22T07:47:12Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ahyun/generate_model
Ahyun
2024-05-22T07:51:13Z
209
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T07:38:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Zoyd/microsoft_Phi-3-medium-128k-instruct-8_0bpw_exl2
Zoyd
2024-05-22T07:49:36Z
6
1
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "custom_code", "multilingual", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "exl2", "region:us" ]
text-generation
2024-05-22T07:42:24Z
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation tags: - nlp - code inference: parameters: temperature: 0.7 widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- **Exllamav2** quant (**exl2** / **8.0 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-2_2bpw_exl2)**</center> | <center>4032 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-2_5bpw_exl2)**</center> | <center>4496 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-3_0bpw_exl2)**</center> | <center>5312 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-3_5bpw_exl2)**</center> | <center>6124 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-3_75bpw_exl2)**</center> | <center>6529 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-4_0bpw_exl2)**</center> | <center>6936 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-4_25bpw_exl2)**</center> | <center>7343 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-5_0bpw_exl2)**</center> | <center>8559 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-6_0bpw_exl2)**</center> | <center>10212 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-6_5bpw_exl2)**</center> | <center>11033 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-8_0bpw_exl2)**</center> | <center>12810 MB</center> | <center>8</center> | ## Model Summary The Phi-3-Medium-128K-Instruct is a 14B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties. The model belongs to the Phi-3 family with the Medium version in two variants [4k](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) which is the context length (in tokens) that it can support. The model has underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Medium-128K-Instruct showcased a robust and state-of-the-art performance among models of the same-size and next-size-up. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) | | Short Context | Long Context | | ------- | ------------- | ------------ | | Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)| | Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)| | Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)| | Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct)| ## Intended Uses **Primary use cases** The model is intended for broad commercial and research use in English. The model provides uses for general purpose AI systems and applications which require : 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. ## How to Use Phi-3-Medium-128k-Instruct has been integrated in the development version (4.40.2) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following: * When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function. * Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source. The current `transformers` version can be verified with: `pip list | grep transformers`. Phi-3-Medium-128k-Instruct is also available in [Azure AI Studio](https://aka.ms/phi3-azure-ai). ### Tokenizer Phi-3-Medium-128k-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Chat Format Given the nature of the training data, the Phi-3-Medium-128k-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follow: ```markdown <|user|>\nQuestion <|end|>\n<|assistant|> ``` For example: ```markdown <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after `<|assistant|>` . In case of few-shots prompt, the prompt can be formatted as the following: ```markdown <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Sample inference code This code snippets show how to get quickly started with running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model_id = "microsoft/Phi-3-medium-128k-instruct" model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_id) messages = [ {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` *Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.* ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3-Medium-128k-Instruct has 14B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines. * Inputs: Text. It is best suited for prompts using chat format. * Context length: 128k tokens * GPUs: 512 H100-80G * Training time: 42 days * Training data: 4.8T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models. * Release dates: The model weight is released on May 21, 2024. ### Datasets Our training data includes a wide variety of sources, totaling 4.8 trillion tokens (including 10% multilingual), and is a combination of 1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. We are focusing on the quality of data that could potentially improve the reasoning ability for the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in premier league in a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for the small size models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report). ## Benchmarks We report the results for Phi-3-Medium-128k-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mixtral-8x22b, Gemini-Pro, Command R+ 104B, Llama-3-70B-Instruct, GPT-3.5-Turbo-1106, and GPT-4-Turbo-1106(Chat). All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation. As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k–shot examples is listed per-benchmark. |Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |---------|-----------------------|--------|-------------|-------------------|-------------------|----------|------------------------| |AGI Eval<br>5-shot|49.7|50.1|54.0|56.9|48.4|49.0|59.6| |MMLU<br>5-shot|76.6|73.8|76.2|80.2|71.4|66.7|84.0| |BigBench Hard<br>3-shot|77.9|74.1|81.8|80.4|68.3|75.6|87.7| |ANLI<br>7-shot|57.3|63.4|65.2|68.3|58.1|64.2|71.7| |HellaSwag<br>5-shot|81.6|78.0|79.0|82.6|78.8|76.2|88.3| |ARC Challenge<br>10-shot|91.0|86.9|91.3|93.0|87.4|88.3|95.6| |ARC Easy<br>10-shot|97.6|95.7|96.9|98.2|96.3|96.1|98.8| |BoolQ<br>2-shot|86.5|86.1|82.7|89.1|79.1|86.4|91.3| |CommonsenseQA<br>10-shot|82.2|82.0|82.0|84.4|79.6|81.8|86.7| |MedQA<br>2-shot|67.6|59.2|67.9|78.5|63.4|58.2|83.7| |OpenBookQA<br>10-shot|87.2|86.8|88.6|91.8|86.0|86.4|93.4| |PIQA<br>5-shot|87.8|86.4|85.0|85.3|86.6|86.2|90.1| |Social IQA<br>5-shot|79.0|75.3|78.2|81.1|68.3|75.4|81.7| |TruthfulQA (MC2)<br>10-shot|74.3|57.8|67.4|81.9|67.7|72.6|85.2| |WinoGrande<br>5-shot|78.9|77.0|75.3|83.3|68.8|72.2|86.7| |TriviaQA<br>5-shot|73.9|82.8|84.5|78.5|85.8|80.2|73.3| |GSM8K Chain of Thought<br>8-shot|87.5|78.3|83.8|93.5|78.1|80.4|94.2| |HumanEval<br>0-shot|58.5|61.6|39.6|78.7|62.2|64.4|79.9| |MBPP<br>3-shot|73.8|68.9|70.7|81.3|77.8|73.2|86.7| |Average|77.3|75.0|76.3|82.5|74.3|75.4|85.2| We take a closer look at different categories across 80 public benchmark datasets at the table below: |Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |--------|------------------------|--------|-------------|-------------------|-------------------|----------|------------------------| | Popular aggregated benchmark | 72.3 | 69.9 | 73.4 | 76.3 | 67.0 | 67.5 | 80.5 | | Reasoning | 83.2 | 79.3 | 81.5 | 86.7 | 78.3 | 80.4 | 89.3 | | Language understanding | 75.3 | 75.7 | 78.7 | 77.9 | 70.4 | 75.3 | 81.6 | | Code generation | 64.2 | 68.6 | 60.0 | 69.3 | 70.4 | 66.7 | 76.1 | | Math | 52.9 | 45.3 | 52.5 | 59.7 | 52.8 | 50.9 | 67.1 | | Factual knowledge | 47.5 | 60.3 | 60.6 | 52.4 | 63.4 | 54.6 | 45.9 | | Multilingual | 62.2 | 67.8 | 69.8 | 62.0 | 67.0 | 73.4 | 78.2 | | Robustness | 70.2 | 57.9 | 65.5 | 78.7 | 69.3 | 69.7 | 84.6 | ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3-Medium model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: + Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128k](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda) ## Cross Platform Support ONNX runtime ecosystem now supports Phi3 Medium models across platforms and hardware. Optimized phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktops GPUs (AMD, Intel, and NVIDIA). Along with DML, ONNX Runtime provides cross platform support for Phi3 Medium across a range of devices CPU, GPU, and mobile. Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. ONNX model for int4 CUDA: Quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN ## License The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-medium-128k/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
theglassofwater/finetuning_8epochs
theglassofwater
2024-05-22T07:49:29Z
194
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T07:49:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RomBor/a2c-PandaReachDense-v3
RomBor
2024-05-22T07:47:35Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-22T07:40:18Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.21 +/- 0.11 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of a **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
BilalMuftuoglu/deit-base-distilled-patch16-224-65-fold4
BilalMuftuoglu
2024-05-22T07:47:03Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "deit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-base-distilled-patch16-224", "base_model:finetune:facebook/deit-base-distilled-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-21T23:28:10Z
--- license: apache-2.0 base_model: facebook/deit-base-distilled-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: deit-base-distilled-patch16-224-65-fold4 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.8450704225352113 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deit-base-distilled-patch16-224-65-fold4 This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5367 - Accuracy: 0.8451 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.9231 | 3 | 0.7145 | 0.5211 | | No log | 1.8462 | 6 | 0.7082 | 0.5070 | | No log | 2.7692 | 9 | 0.6889 | 0.6056 | | 0.6878 | 4.0 | 13 | 0.6703 | 0.6620 | | 0.6878 | 4.9231 | 16 | 0.6556 | 0.6761 | | 0.6878 | 5.8462 | 19 | 0.6430 | 0.6620 | | 0.6203 | 6.7692 | 22 | 0.6250 | 0.6761 | | 0.6203 | 8.0 | 26 | 0.7464 | 0.6197 | | 0.6203 | 8.9231 | 29 | 0.6647 | 0.6056 | | 0.5703 | 9.8462 | 32 | 0.6097 | 0.7042 | | 0.5703 | 10.7692 | 35 | 0.6261 | 0.6620 | | 0.5703 | 12.0 | 39 | 0.5926 | 0.7042 | | 0.5281 | 12.9231 | 42 | 0.5370 | 0.7465 | | 0.5281 | 13.8462 | 45 | 0.5638 | 0.7465 | | 0.5281 | 14.7692 | 48 | 0.7175 | 0.6056 | | 0.4616 | 16.0 | 52 | 0.8917 | 0.5775 | | 0.4616 | 16.9231 | 55 | 0.6761 | 0.6761 | | 0.4616 | 17.8462 | 58 | 0.5606 | 0.7324 | | 0.4943 | 18.7692 | 61 | 0.6963 | 0.6338 | | 0.4943 | 20.0 | 65 | 0.6462 | 0.6620 | | 0.4943 | 20.9231 | 68 | 0.6246 | 0.7183 | | 0.4058 | 21.8462 | 71 | 0.7336 | 0.6620 | | 0.4058 | 22.7692 | 74 | 0.6270 | 0.7324 | | 0.4058 | 24.0 | 78 | 0.6097 | 0.7183 | | 0.3577 | 24.9231 | 81 | 0.6700 | 0.7606 | | 0.3577 | 25.8462 | 84 | 0.6676 | 0.7183 | | 0.3577 | 26.7692 | 87 | 0.5475 | 0.7887 | | 0.2988 | 28.0 | 91 | 0.5383 | 0.8028 | | 0.2988 | 28.9231 | 94 | 0.5534 | 0.7183 | | 0.2988 | 29.8462 | 97 | 0.5842 | 0.8028 | | 0.2595 | 30.7692 | 100 | 0.5965 | 0.7887 | | 0.2595 | 32.0 | 104 | 0.6220 | 0.7606 | | 0.2595 | 32.9231 | 107 | 0.6027 | 0.7606 | | 0.2422 | 33.8462 | 110 | 0.6369 | 0.7183 | | 0.2422 | 34.7692 | 113 | 0.6033 | 0.7746 | | 0.2422 | 36.0 | 117 | 0.6912 | 0.7324 | | 0.1927 | 36.9231 | 120 | 0.6582 | 0.7887 | | 0.1927 | 37.8462 | 123 | 0.6320 | 0.7746 | | 0.1927 | 38.7692 | 126 | 0.7532 | 0.7606 | | 0.2399 | 40.0 | 130 | 0.7909 | 0.7606 | | 0.2399 | 40.9231 | 133 | 0.6808 | 0.7465 | | 0.2399 | 41.8462 | 136 | 0.5816 | 0.7887 | | 0.2399 | 42.7692 | 139 | 0.5474 | 0.7887 | | 0.2218 | 44.0 | 143 | 0.6310 | 0.7042 | | 0.2218 | 44.9231 | 146 | 0.6453 | 0.8028 | | 0.2218 | 45.8462 | 149 | 0.6170 | 0.7887 | | 0.1817 | 46.7692 | 152 | 0.6034 | 0.7887 | | 0.1817 | 48.0 | 156 | 0.6350 | 0.8310 | | 0.1817 | 48.9231 | 159 | 0.6027 | 0.7887 | | 0.1483 | 49.8462 | 162 | 0.5599 | 0.8028 | | 0.1483 | 50.7692 | 165 | 0.5817 | 0.8310 | | 0.1483 | 52.0 | 169 | 0.6086 | 0.7746 | | 0.1668 | 52.9231 | 172 | 0.5744 | 0.8169 | | 0.1668 | 53.8462 | 175 | 0.6059 | 0.7887 | | 0.1668 | 54.7692 | 178 | 0.6455 | 0.7887 | | 0.1372 | 56.0 | 182 | 0.5367 | 0.8451 | | 0.1372 | 56.9231 | 185 | 0.5615 | 0.8169 | | 0.1372 | 57.8462 | 188 | 0.6378 | 0.8028 | | 0.1485 | 58.7692 | 191 | 0.5687 | 0.8169 | | 0.1485 | 60.0 | 195 | 0.4897 | 0.8169 | | 0.1485 | 60.9231 | 198 | 0.4384 | 0.8451 | | 0.1426 | 61.8462 | 201 | 0.5087 | 0.7887 | | 0.1426 | 62.7692 | 204 | 0.4757 | 0.8169 | | 0.1426 | 64.0 | 208 | 0.4373 | 0.8169 | | 0.1333 | 64.9231 | 211 | 0.4512 | 0.8169 | | 0.1333 | 65.8462 | 214 | 0.4619 | 0.7887 | | 0.1333 | 66.7692 | 217 | 0.5520 | 0.8028 | | 0.1306 | 68.0 | 221 | 0.5161 | 0.7887 | | 0.1306 | 68.9231 | 224 | 0.5180 | 0.7606 | | 0.1306 | 69.8462 | 227 | 0.5778 | 0.8028 | | 0.1327 | 70.7692 | 230 | 0.5933 | 0.8028 | | 0.1327 | 72.0 | 234 | 0.5222 | 0.7887 | | 0.1327 | 72.9231 | 237 | 0.5104 | 0.8169 | | 0.1171 | 73.8462 | 240 | 0.5024 | 0.8169 | | 0.1171 | 74.7692 | 243 | 0.5060 | 0.8028 | | 0.1171 | 76.0 | 247 | 0.5267 | 0.7746 | | 0.1227 | 76.9231 | 250 | 0.4775 | 0.8169 | | 0.1227 | 77.8462 | 253 | 0.5020 | 0.8169 | | 0.1227 | 78.7692 | 256 | 0.5243 | 0.7606 | | 0.1304 | 80.0 | 260 | 0.6195 | 0.7887 | | 0.1304 | 80.9231 | 263 | 0.5740 | 0.7606 | | 0.1304 | 81.8462 | 266 | 0.5652 | 0.8169 | | 0.1304 | 82.7692 | 269 | 0.5750 | 0.8169 | | 0.1152 | 84.0 | 273 | 0.5829 | 0.7887 | | 0.1152 | 84.9231 | 276 | 0.5854 | 0.7887 | | 0.1152 | 85.8462 | 279 | 0.5854 | 0.7887 | | 0.1069 | 86.7692 | 282 | 0.5826 | 0.8028 | | 0.1069 | 88.0 | 286 | 0.5839 | 0.7887 | | 0.1069 | 88.9231 | 289 | 0.5792 | 0.8169 | | 0.122 | 89.8462 | 292 | 0.5755 | 0.8169 | | 0.122 | 90.7692 | 295 | 0.5751 | 0.8169 | | 0.122 | 92.0 | 299 | 0.5748 | 0.8169 | | 0.1268 | 92.3077 | 300 | 0.5748 | 0.8169 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
KnutJaegersberg/Deita-34b
KnutJaegersberg
2024-05-22T07:44:42Z
122
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dataset:KnutJaegersberg/Deita-6k", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-16T10:01:24Z
--- license: apache-2.0 datasets: - KnutJaegersberg/Deita-6k --- Prompt Example: ``` ### System: You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps. ### User: How do you fine tune a large language model? ### Assistant: ```
MaziyarPanahi/MyriadStrangemerges_32-7B-GGUF
MaziyarPanahi
2024-05-22T07:43:23Z
133
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:Muhammad2003/Myriad-7B-Slerp", "base_model:Gille/StrangeMerges_32-7B-slerp", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:automerger/MyriadStrangemerges_32-7B", "base_model:quantized:automerger/MyriadStrangemerges_32-7B" ]
text-generation
2024-05-22T07:12:30Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - merge - mergekit - lazymergekit - automerger - base_model:Muhammad2003/Myriad-7B-Slerp - base_model:Gille/StrangeMerges_32-7B-slerp - license:cc-by-nc-4.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: MyriadStrangemerges_32-7B-GGUF base_model: automerger/MyriadStrangemerges_32-7B inference: false model_creator: automerger pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/MyriadStrangemerges_32-7B-GGUF](https://huggingface.co/MaziyarPanahi/MyriadStrangemerges_32-7B-GGUF) - Model creator: [automerger](https://huggingface.co/automerger) - Original model: [automerger/MyriadStrangemerges_32-7B](https://huggingface.co/automerger/MyriadStrangemerges_32-7B) ## Description [MaziyarPanahi/MyriadStrangemerges_32-7B-GGUF](https://huggingface.co/MaziyarPanahi/MyriadStrangemerges_32-7B-GGUF) contains GGUF format model files for [automerger/MyriadStrangemerges_32-7B](https://huggingface.co/automerger/MyriadStrangemerges_32-7B). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
Zoyd/microsoft_Phi-3-medium-128k-instruct-6_5bpw_exl2
Zoyd
2024-05-22T07:42:42Z
5
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "custom_code", "multilingual", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-22T07:27:36Z
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation tags: - nlp - code inference: parameters: temperature: 0.7 widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- **Exllamav2** quant (**exl2** / **6.5 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-2_2bpw_exl2)**</center> | <center>4032 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-2_5bpw_exl2)**</center> | <center>4496 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-3_0bpw_exl2)**</center> | <center>5312 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-3_5bpw_exl2)**</center> | <center>6124 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-3_75bpw_exl2)**</center> | <center>6529 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-4_0bpw_exl2)**</center> | <center>6936 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-4_25bpw_exl2)**</center> | <center>7343 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-5_0bpw_exl2)**</center> | <center>8559 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-6_0bpw_exl2)**</center> | <center>10212 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-6_5bpw_exl2)**</center> | <center>11033 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-8_0bpw_exl2)**</center> | <center>12810 MB</center> | <center>8</center> | ## Model Summary The Phi-3-Medium-128K-Instruct is a 14B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties. The model belongs to the Phi-3 family with the Medium version in two variants [4k](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) which is the context length (in tokens) that it can support. The model has underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Medium-128K-Instruct showcased a robust and state-of-the-art performance among models of the same-size and next-size-up. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) | | Short Context | Long Context | | ------- | ------------- | ------------ | | Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)| | Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)| | Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)| | Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct)| ## Intended Uses **Primary use cases** The model is intended for broad commercial and research use in English. The model provides uses for general purpose AI systems and applications which require : 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. ## How to Use Phi-3-Medium-128k-Instruct has been integrated in the development version (4.40.2) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following: * When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function. * Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source. The current `transformers` version can be verified with: `pip list | grep transformers`. Phi-3-Medium-128k-Instruct is also available in [Azure AI Studio](https://aka.ms/phi3-azure-ai). ### Tokenizer Phi-3-Medium-128k-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Chat Format Given the nature of the training data, the Phi-3-Medium-128k-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follow: ```markdown <|user|>\nQuestion <|end|>\n<|assistant|> ``` For example: ```markdown <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after `<|assistant|>` . In case of few-shots prompt, the prompt can be formatted as the following: ```markdown <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Sample inference code This code snippets show how to get quickly started with running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model_id = "microsoft/Phi-3-medium-128k-instruct" model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_id) messages = [ {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` *Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.* ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3-Medium-128k-Instruct has 14B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines. * Inputs: Text. It is best suited for prompts using chat format. * Context length: 128k tokens * GPUs: 512 H100-80G * Training time: 42 days * Training data: 4.8T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models. * Release dates: The model weight is released on May 21, 2024. ### Datasets Our training data includes a wide variety of sources, totaling 4.8 trillion tokens (including 10% multilingual), and is a combination of 1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. We are focusing on the quality of data that could potentially improve the reasoning ability for the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in premier league in a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for the small size models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report). ## Benchmarks We report the results for Phi-3-Medium-128k-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mixtral-8x22b, Gemini-Pro, Command R+ 104B, Llama-3-70B-Instruct, GPT-3.5-Turbo-1106, and GPT-4-Turbo-1106(Chat). All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation. As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k–shot examples is listed per-benchmark. |Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |---------|-----------------------|--------|-------------|-------------------|-------------------|----------|------------------------| |AGI Eval<br>5-shot|49.7|50.1|54.0|56.9|48.4|49.0|59.6| |MMLU<br>5-shot|76.6|73.8|76.2|80.2|71.4|66.7|84.0| |BigBench Hard<br>3-shot|77.9|74.1|81.8|80.4|68.3|75.6|87.7| |ANLI<br>7-shot|57.3|63.4|65.2|68.3|58.1|64.2|71.7| |HellaSwag<br>5-shot|81.6|78.0|79.0|82.6|78.8|76.2|88.3| |ARC Challenge<br>10-shot|91.0|86.9|91.3|93.0|87.4|88.3|95.6| |ARC Easy<br>10-shot|97.6|95.7|96.9|98.2|96.3|96.1|98.8| |BoolQ<br>2-shot|86.5|86.1|82.7|89.1|79.1|86.4|91.3| |CommonsenseQA<br>10-shot|82.2|82.0|82.0|84.4|79.6|81.8|86.7| |MedQA<br>2-shot|67.6|59.2|67.9|78.5|63.4|58.2|83.7| |OpenBookQA<br>10-shot|87.2|86.8|88.6|91.8|86.0|86.4|93.4| |PIQA<br>5-shot|87.8|86.4|85.0|85.3|86.6|86.2|90.1| |Social IQA<br>5-shot|79.0|75.3|78.2|81.1|68.3|75.4|81.7| |TruthfulQA (MC2)<br>10-shot|74.3|57.8|67.4|81.9|67.7|72.6|85.2| |WinoGrande<br>5-shot|78.9|77.0|75.3|83.3|68.8|72.2|86.7| |TriviaQA<br>5-shot|73.9|82.8|84.5|78.5|85.8|80.2|73.3| |GSM8K Chain of Thought<br>8-shot|87.5|78.3|83.8|93.5|78.1|80.4|94.2| |HumanEval<br>0-shot|58.5|61.6|39.6|78.7|62.2|64.4|79.9| |MBPP<br>3-shot|73.8|68.9|70.7|81.3|77.8|73.2|86.7| |Average|77.3|75.0|76.3|82.5|74.3|75.4|85.2| We take a closer look at different categories across 80 public benchmark datasets at the table below: |Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |--------|------------------------|--------|-------------|-------------------|-------------------|----------|------------------------| | Popular aggregated benchmark | 72.3 | 69.9 | 73.4 | 76.3 | 67.0 | 67.5 | 80.5 | | Reasoning | 83.2 | 79.3 | 81.5 | 86.7 | 78.3 | 80.4 | 89.3 | | Language understanding | 75.3 | 75.7 | 78.7 | 77.9 | 70.4 | 75.3 | 81.6 | | Code generation | 64.2 | 68.6 | 60.0 | 69.3 | 70.4 | 66.7 | 76.1 | | Math | 52.9 | 45.3 | 52.5 | 59.7 | 52.8 | 50.9 | 67.1 | | Factual knowledge | 47.5 | 60.3 | 60.6 | 52.4 | 63.4 | 54.6 | 45.9 | | Multilingual | 62.2 | 67.8 | 69.8 | 62.0 | 67.0 | 73.4 | 78.2 | | Robustness | 70.2 | 57.9 | 65.5 | 78.7 | 69.3 | 69.7 | 84.6 | ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3-Medium model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: + Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128k](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda) ## Cross Platform Support ONNX runtime ecosystem now supports Phi3 Medium models across platforms and hardware. Optimized phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktops GPUs (AMD, Intel, and NVIDIA). Along with DML, ONNX Runtime provides cross platform support for Phi3 Medium across a range of devices CPU, GPU, and mobile. Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. ONNX model for int4 CUDA: Quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN ## License The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-medium-128k/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
Zoyd/microsoft_Phi-3-medium-128k-instruct-5_0bpw_exl2
Zoyd
2024-05-22T07:42:38Z
4
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "custom_code", "multilingual", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "5-bit", "exl2", "region:us" ]
text-generation
2024-05-22T06:58:53Z
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation tags: - nlp - code inference: parameters: temperature: 0.7 widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- **Exllamav2** quant (**exl2** / **5.0 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-2_2bpw_exl2)**</center> | <center>4032 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-2_5bpw_exl2)**</center> | <center>4496 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-3_0bpw_exl2)**</center> | <center>5312 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-3_5bpw_exl2)**</center> | <center>6124 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-3_75bpw_exl2)**</center> | <center>6529 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-4_0bpw_exl2)**</center> | <center>6936 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-4_25bpw_exl2)**</center> | <center>7343 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-5_0bpw_exl2)**</center> | <center>8559 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-6_0bpw_exl2)**</center> | <center>10212 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-6_5bpw_exl2)**</center> | <center>11033 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-8_0bpw_exl2)**</center> | <center>12810 MB</center> | <center>8</center> | ## Model Summary The Phi-3-Medium-128K-Instruct is a 14B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties. The model belongs to the Phi-3 family with the Medium version in two variants [4k](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) which is the context length (in tokens) that it can support. The model has underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Medium-128K-Instruct showcased a robust and state-of-the-art performance among models of the same-size and next-size-up. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) | | Short Context | Long Context | | ------- | ------------- | ------------ | | Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)| | Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)| | Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)| | Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct)| ## Intended Uses **Primary use cases** The model is intended for broad commercial and research use in English. The model provides uses for general purpose AI systems and applications which require : 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. ## How to Use Phi-3-Medium-128k-Instruct has been integrated in the development version (4.40.2) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following: * When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function. * Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source. The current `transformers` version can be verified with: `pip list | grep transformers`. Phi-3-Medium-128k-Instruct is also available in [Azure AI Studio](https://aka.ms/phi3-azure-ai). ### Tokenizer Phi-3-Medium-128k-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Chat Format Given the nature of the training data, the Phi-3-Medium-128k-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follow: ```markdown <|user|>\nQuestion <|end|>\n<|assistant|> ``` For example: ```markdown <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after `<|assistant|>` . In case of few-shots prompt, the prompt can be formatted as the following: ```markdown <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Sample inference code This code snippets show how to get quickly started with running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model_id = "microsoft/Phi-3-medium-128k-instruct" model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_id) messages = [ {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` *Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.* ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3-Medium-128k-Instruct has 14B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines. * Inputs: Text. It is best suited for prompts using chat format. * Context length: 128k tokens * GPUs: 512 H100-80G * Training time: 42 days * Training data: 4.8T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models. * Release dates: The model weight is released on May 21, 2024. ### Datasets Our training data includes a wide variety of sources, totaling 4.8 trillion tokens (including 10% multilingual), and is a combination of 1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. We are focusing on the quality of data that could potentially improve the reasoning ability for the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in premier league in a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for the small size models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report). ## Benchmarks We report the results for Phi-3-Medium-128k-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mixtral-8x22b, Gemini-Pro, Command R+ 104B, Llama-3-70B-Instruct, GPT-3.5-Turbo-1106, and GPT-4-Turbo-1106(Chat). All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation. As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k–shot examples is listed per-benchmark. |Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |---------|-----------------------|--------|-------------|-------------------|-------------------|----------|------------------------| |AGI Eval<br>5-shot|49.7|50.1|54.0|56.9|48.4|49.0|59.6| |MMLU<br>5-shot|76.6|73.8|76.2|80.2|71.4|66.7|84.0| |BigBench Hard<br>3-shot|77.9|74.1|81.8|80.4|68.3|75.6|87.7| |ANLI<br>7-shot|57.3|63.4|65.2|68.3|58.1|64.2|71.7| |HellaSwag<br>5-shot|81.6|78.0|79.0|82.6|78.8|76.2|88.3| |ARC Challenge<br>10-shot|91.0|86.9|91.3|93.0|87.4|88.3|95.6| |ARC Easy<br>10-shot|97.6|95.7|96.9|98.2|96.3|96.1|98.8| |BoolQ<br>2-shot|86.5|86.1|82.7|89.1|79.1|86.4|91.3| |CommonsenseQA<br>10-shot|82.2|82.0|82.0|84.4|79.6|81.8|86.7| |MedQA<br>2-shot|67.6|59.2|67.9|78.5|63.4|58.2|83.7| |OpenBookQA<br>10-shot|87.2|86.8|88.6|91.8|86.0|86.4|93.4| |PIQA<br>5-shot|87.8|86.4|85.0|85.3|86.6|86.2|90.1| |Social IQA<br>5-shot|79.0|75.3|78.2|81.1|68.3|75.4|81.7| |TruthfulQA (MC2)<br>10-shot|74.3|57.8|67.4|81.9|67.7|72.6|85.2| |WinoGrande<br>5-shot|78.9|77.0|75.3|83.3|68.8|72.2|86.7| |TriviaQA<br>5-shot|73.9|82.8|84.5|78.5|85.8|80.2|73.3| |GSM8K Chain of Thought<br>8-shot|87.5|78.3|83.8|93.5|78.1|80.4|94.2| |HumanEval<br>0-shot|58.5|61.6|39.6|78.7|62.2|64.4|79.9| |MBPP<br>3-shot|73.8|68.9|70.7|81.3|77.8|73.2|86.7| |Average|77.3|75.0|76.3|82.5|74.3|75.4|85.2| We take a closer look at different categories across 80 public benchmark datasets at the table below: |Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |--------|------------------------|--------|-------------|-------------------|-------------------|----------|------------------------| | Popular aggregated benchmark | 72.3 | 69.9 | 73.4 | 76.3 | 67.0 | 67.5 | 80.5 | | Reasoning | 83.2 | 79.3 | 81.5 | 86.7 | 78.3 | 80.4 | 89.3 | | Language understanding | 75.3 | 75.7 | 78.7 | 77.9 | 70.4 | 75.3 | 81.6 | | Code generation | 64.2 | 68.6 | 60.0 | 69.3 | 70.4 | 66.7 | 76.1 | | Math | 52.9 | 45.3 | 52.5 | 59.7 | 52.8 | 50.9 | 67.1 | | Factual knowledge | 47.5 | 60.3 | 60.6 | 52.4 | 63.4 | 54.6 | 45.9 | | Multilingual | 62.2 | 67.8 | 69.8 | 62.0 | 67.0 | 73.4 | 78.2 | | Robustness | 70.2 | 57.9 | 65.5 | 78.7 | 69.3 | 69.7 | 84.6 | ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3-Medium model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: + Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128k](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda) ## Cross Platform Support ONNX runtime ecosystem now supports Phi3 Medium models across platforms and hardware. Optimized phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktops GPUs (AMD, Intel, and NVIDIA). Along with DML, ONNX Runtime provides cross platform support for Phi3 Medium across a range of devices CPU, GPU, and mobile. Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. ONNX model for int4 CUDA: Quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN ## License The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-medium-128k/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
Zoyd/microsoft_Phi-3-medium-128k-instruct-3_75bpw_exl2
Zoyd
2024-05-22T07:42:37Z
4
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "custom_code", "multilingual", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-22T06:15:56Z
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation tags: - nlp - code inference: parameters: temperature: 0.7 widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- **Exllamav2** quant (**exl2** / **3.75 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-2_2bpw_exl2)**</center> | <center>4032 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-2_5bpw_exl2)**</center> | <center>4496 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-3_0bpw_exl2)**</center> | <center>5312 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-3_5bpw_exl2)**</center> | <center>6124 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-3_75bpw_exl2)**</center> | <center>6529 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-4_0bpw_exl2)**</center> | <center>6936 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-4_25bpw_exl2)**</center> | <center>7343 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-5_0bpw_exl2)**</center> | <center>8559 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-6_0bpw_exl2)**</center> | <center>10212 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-6_5bpw_exl2)**</center> | <center>11033 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-8_0bpw_exl2)**</center> | <center>12810 MB</center> | <center>8</center> | ## Model Summary The Phi-3-Medium-128K-Instruct is a 14B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties. The model belongs to the Phi-3 family with the Medium version in two variants [4k](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) which is the context length (in tokens) that it can support. The model has underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Medium-128K-Instruct showcased a robust and state-of-the-art performance among models of the same-size and next-size-up. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) | | Short Context | Long Context | | ------- | ------------- | ------------ | | Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)| | Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)| | Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)| | Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct)| ## Intended Uses **Primary use cases** The model is intended for broad commercial and research use in English. The model provides uses for general purpose AI systems and applications which require : 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. ## How to Use Phi-3-Medium-128k-Instruct has been integrated in the development version (4.40.2) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following: * When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function. * Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source. The current `transformers` version can be verified with: `pip list | grep transformers`. Phi-3-Medium-128k-Instruct is also available in [Azure AI Studio](https://aka.ms/phi3-azure-ai). ### Tokenizer Phi-3-Medium-128k-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Chat Format Given the nature of the training data, the Phi-3-Medium-128k-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follow: ```markdown <|user|>\nQuestion <|end|>\n<|assistant|> ``` For example: ```markdown <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after `<|assistant|>` . In case of few-shots prompt, the prompt can be formatted as the following: ```markdown <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Sample inference code This code snippets show how to get quickly started with running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model_id = "microsoft/Phi-3-medium-128k-instruct" model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_id) messages = [ {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` *Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.* ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3-Medium-128k-Instruct has 14B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines. * Inputs: Text. It is best suited for prompts using chat format. * Context length: 128k tokens * GPUs: 512 H100-80G * Training time: 42 days * Training data: 4.8T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models. * Release dates: The model weight is released on May 21, 2024. ### Datasets Our training data includes a wide variety of sources, totaling 4.8 trillion tokens (including 10% multilingual), and is a combination of 1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. We are focusing on the quality of data that could potentially improve the reasoning ability for the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in premier league in a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for the small size models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report). ## Benchmarks We report the results for Phi-3-Medium-128k-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mixtral-8x22b, Gemini-Pro, Command R+ 104B, Llama-3-70B-Instruct, GPT-3.5-Turbo-1106, and GPT-4-Turbo-1106(Chat). All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation. As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k–shot examples is listed per-benchmark. |Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |---------|-----------------------|--------|-------------|-------------------|-------------------|----------|------------------------| |AGI Eval<br>5-shot|49.7|50.1|54.0|56.9|48.4|49.0|59.6| |MMLU<br>5-shot|76.6|73.8|76.2|80.2|71.4|66.7|84.0| |BigBench Hard<br>3-shot|77.9|74.1|81.8|80.4|68.3|75.6|87.7| |ANLI<br>7-shot|57.3|63.4|65.2|68.3|58.1|64.2|71.7| |HellaSwag<br>5-shot|81.6|78.0|79.0|82.6|78.8|76.2|88.3| |ARC Challenge<br>10-shot|91.0|86.9|91.3|93.0|87.4|88.3|95.6| |ARC Easy<br>10-shot|97.6|95.7|96.9|98.2|96.3|96.1|98.8| |BoolQ<br>2-shot|86.5|86.1|82.7|89.1|79.1|86.4|91.3| |CommonsenseQA<br>10-shot|82.2|82.0|82.0|84.4|79.6|81.8|86.7| |MedQA<br>2-shot|67.6|59.2|67.9|78.5|63.4|58.2|83.7| |OpenBookQA<br>10-shot|87.2|86.8|88.6|91.8|86.0|86.4|93.4| |PIQA<br>5-shot|87.8|86.4|85.0|85.3|86.6|86.2|90.1| |Social IQA<br>5-shot|79.0|75.3|78.2|81.1|68.3|75.4|81.7| |TruthfulQA (MC2)<br>10-shot|74.3|57.8|67.4|81.9|67.7|72.6|85.2| |WinoGrande<br>5-shot|78.9|77.0|75.3|83.3|68.8|72.2|86.7| |TriviaQA<br>5-shot|73.9|82.8|84.5|78.5|85.8|80.2|73.3| |GSM8K Chain of Thought<br>8-shot|87.5|78.3|83.8|93.5|78.1|80.4|94.2| |HumanEval<br>0-shot|58.5|61.6|39.6|78.7|62.2|64.4|79.9| |MBPP<br>3-shot|73.8|68.9|70.7|81.3|77.8|73.2|86.7| |Average|77.3|75.0|76.3|82.5|74.3|75.4|85.2| We take a closer look at different categories across 80 public benchmark datasets at the table below: |Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |--------|------------------------|--------|-------------|-------------------|-------------------|----------|------------------------| | Popular aggregated benchmark | 72.3 | 69.9 | 73.4 | 76.3 | 67.0 | 67.5 | 80.5 | | Reasoning | 83.2 | 79.3 | 81.5 | 86.7 | 78.3 | 80.4 | 89.3 | | Language understanding | 75.3 | 75.7 | 78.7 | 77.9 | 70.4 | 75.3 | 81.6 | | Code generation | 64.2 | 68.6 | 60.0 | 69.3 | 70.4 | 66.7 | 76.1 | | Math | 52.9 | 45.3 | 52.5 | 59.7 | 52.8 | 50.9 | 67.1 | | Factual knowledge | 47.5 | 60.3 | 60.6 | 52.4 | 63.4 | 54.6 | 45.9 | | Multilingual | 62.2 | 67.8 | 69.8 | 62.0 | 67.0 | 73.4 | 78.2 | | Robustness | 70.2 | 57.9 | 65.5 | 78.7 | 69.3 | 69.7 | 84.6 | ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3-Medium model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: + Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128k](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda) ## Cross Platform Support ONNX runtime ecosystem now supports Phi3 Medium models across platforms and hardware. Optimized phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktops GPUs (AMD, Intel, and NVIDIA). Along with DML, ONNX Runtime provides cross platform support for Phi3 Medium across a range of devices CPU, GPU, and mobile. Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. ONNX model for int4 CUDA: Quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN ## License The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-medium-128k/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
Zoyd/microsoft_Phi-3-medium-128k-instruct-4_25bpw_exl2
Zoyd
2024-05-22T07:42:37Z
4
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "custom_code", "multilingual", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-22T06:44:31Z
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation tags: - nlp - code inference: parameters: temperature: 0.7 widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- **Exllamav2** quant (**exl2** / **4.25 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-2_2bpw_exl2)**</center> | <center>4032 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-2_5bpw_exl2)**</center> | <center>4496 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-3_0bpw_exl2)**</center> | <center>5312 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-3_5bpw_exl2)**</center> | <center>6124 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-3_75bpw_exl2)**</center> | <center>6529 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-4_0bpw_exl2)**</center> | <center>6936 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-4_25bpw_exl2)**</center> | <center>7343 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-5_0bpw_exl2)**</center> | <center>8559 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-6_0bpw_exl2)**</center> | <center>10212 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-6_5bpw_exl2)**</center> | <center>11033 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-8_0bpw_exl2)**</center> | <center>12810 MB</center> | <center>8</center> | ## Model Summary The Phi-3-Medium-128K-Instruct is a 14B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties. The model belongs to the Phi-3 family with the Medium version in two variants [4k](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) which is the context length (in tokens) that it can support. The model has underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Medium-128K-Instruct showcased a robust and state-of-the-art performance among models of the same-size and next-size-up. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) | | Short Context | Long Context | | ------- | ------------- | ------------ | | Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)| | Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)| | Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)| | Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct)| ## Intended Uses **Primary use cases** The model is intended for broad commercial and research use in English. The model provides uses for general purpose AI systems and applications which require : 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. ## How to Use Phi-3-Medium-128k-Instruct has been integrated in the development version (4.40.2) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following: * When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function. * Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source. The current `transformers` version can be verified with: `pip list | grep transformers`. Phi-3-Medium-128k-Instruct is also available in [Azure AI Studio](https://aka.ms/phi3-azure-ai). ### Tokenizer Phi-3-Medium-128k-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Chat Format Given the nature of the training data, the Phi-3-Medium-128k-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follow: ```markdown <|user|>\nQuestion <|end|>\n<|assistant|> ``` For example: ```markdown <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after `<|assistant|>` . In case of few-shots prompt, the prompt can be formatted as the following: ```markdown <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Sample inference code This code snippets show how to get quickly started with running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model_id = "microsoft/Phi-3-medium-128k-instruct" model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_id) messages = [ {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` *Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.* ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3-Medium-128k-Instruct has 14B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines. * Inputs: Text. It is best suited for prompts using chat format. * Context length: 128k tokens * GPUs: 512 H100-80G * Training time: 42 days * Training data: 4.8T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models. * Release dates: The model weight is released on May 21, 2024. ### Datasets Our training data includes a wide variety of sources, totaling 4.8 trillion tokens (including 10% multilingual), and is a combination of 1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. We are focusing on the quality of data that could potentially improve the reasoning ability for the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in premier league in a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for the small size models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report). ## Benchmarks We report the results for Phi-3-Medium-128k-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mixtral-8x22b, Gemini-Pro, Command R+ 104B, Llama-3-70B-Instruct, GPT-3.5-Turbo-1106, and GPT-4-Turbo-1106(Chat). All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation. As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k–shot examples is listed per-benchmark. |Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |---------|-----------------------|--------|-------------|-------------------|-------------------|----------|------------------------| |AGI Eval<br>5-shot|49.7|50.1|54.0|56.9|48.4|49.0|59.6| |MMLU<br>5-shot|76.6|73.8|76.2|80.2|71.4|66.7|84.0| |BigBench Hard<br>3-shot|77.9|74.1|81.8|80.4|68.3|75.6|87.7| |ANLI<br>7-shot|57.3|63.4|65.2|68.3|58.1|64.2|71.7| |HellaSwag<br>5-shot|81.6|78.0|79.0|82.6|78.8|76.2|88.3| |ARC Challenge<br>10-shot|91.0|86.9|91.3|93.0|87.4|88.3|95.6| |ARC Easy<br>10-shot|97.6|95.7|96.9|98.2|96.3|96.1|98.8| |BoolQ<br>2-shot|86.5|86.1|82.7|89.1|79.1|86.4|91.3| |CommonsenseQA<br>10-shot|82.2|82.0|82.0|84.4|79.6|81.8|86.7| |MedQA<br>2-shot|67.6|59.2|67.9|78.5|63.4|58.2|83.7| |OpenBookQA<br>10-shot|87.2|86.8|88.6|91.8|86.0|86.4|93.4| |PIQA<br>5-shot|87.8|86.4|85.0|85.3|86.6|86.2|90.1| |Social IQA<br>5-shot|79.0|75.3|78.2|81.1|68.3|75.4|81.7| |TruthfulQA (MC2)<br>10-shot|74.3|57.8|67.4|81.9|67.7|72.6|85.2| |WinoGrande<br>5-shot|78.9|77.0|75.3|83.3|68.8|72.2|86.7| |TriviaQA<br>5-shot|73.9|82.8|84.5|78.5|85.8|80.2|73.3| |GSM8K Chain of Thought<br>8-shot|87.5|78.3|83.8|93.5|78.1|80.4|94.2| |HumanEval<br>0-shot|58.5|61.6|39.6|78.7|62.2|64.4|79.9| |MBPP<br>3-shot|73.8|68.9|70.7|81.3|77.8|73.2|86.7| |Average|77.3|75.0|76.3|82.5|74.3|75.4|85.2| We take a closer look at different categories across 80 public benchmark datasets at the table below: |Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |--------|------------------------|--------|-------------|-------------------|-------------------|----------|------------------------| | Popular aggregated benchmark | 72.3 | 69.9 | 73.4 | 76.3 | 67.0 | 67.5 | 80.5 | | Reasoning | 83.2 | 79.3 | 81.5 | 86.7 | 78.3 | 80.4 | 89.3 | | Language understanding | 75.3 | 75.7 | 78.7 | 77.9 | 70.4 | 75.3 | 81.6 | | Code generation | 64.2 | 68.6 | 60.0 | 69.3 | 70.4 | 66.7 | 76.1 | | Math | 52.9 | 45.3 | 52.5 | 59.7 | 52.8 | 50.9 | 67.1 | | Factual knowledge | 47.5 | 60.3 | 60.6 | 52.4 | 63.4 | 54.6 | 45.9 | | Multilingual | 62.2 | 67.8 | 69.8 | 62.0 | 67.0 | 73.4 | 78.2 | | Robustness | 70.2 | 57.9 | 65.5 | 78.7 | 69.3 | 69.7 | 84.6 | ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3-Medium model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: + Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128k](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda) ## Cross Platform Support ONNX runtime ecosystem now supports Phi3 Medium models across platforms and hardware. Optimized phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktops GPUs (AMD, Intel, and NVIDIA). Along with DML, ONNX Runtime provides cross platform support for Phi3 Medium across a range of devices CPU, GPU, and mobile. Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. ONNX model for int4 CUDA: Quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN ## License The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-medium-128k/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
Zoyd/microsoft_Phi-3-medium-128k-instruct-3_0bpw_exl2
Zoyd
2024-05-22T07:42:35Z
6
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "custom_code", "multilingual", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "3-bit", "exl2", "region:us" ]
text-generation
2024-05-22T05:47:45Z
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation tags: - nlp - code inference: parameters: temperature: 0.7 widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- **Exllamav2** quant (**exl2** / **3.0 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-2_2bpw_exl2)**</center> | <center>4032 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-2_5bpw_exl2)**</center> | <center>4496 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-3_0bpw_exl2)**</center> | <center>5312 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-3_5bpw_exl2)**</center> | <center>6124 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-3_75bpw_exl2)**</center> | <center>6529 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-4_0bpw_exl2)**</center> | <center>6936 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-4_25bpw_exl2)**</center> | <center>7343 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-5_0bpw_exl2)**</center> | <center>8559 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-6_0bpw_exl2)**</center> | <center>10212 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-6_5bpw_exl2)**</center> | <center>11033 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-8_0bpw_exl2)**</center> | <center>12810 MB</center> | <center>8</center> | ## Model Summary The Phi-3-Medium-128K-Instruct is a 14B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties. The model belongs to the Phi-3 family with the Medium version in two variants [4k](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) which is the context length (in tokens) that it can support. The model has underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Medium-128K-Instruct showcased a robust and state-of-the-art performance among models of the same-size and next-size-up. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) | | Short Context | Long Context | | ------- | ------------- | ------------ | | Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)| | Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)| | Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)| | Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct)| ## Intended Uses **Primary use cases** The model is intended for broad commercial and research use in English. The model provides uses for general purpose AI systems and applications which require : 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. ## How to Use Phi-3-Medium-128k-Instruct has been integrated in the development version (4.40.2) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following: * When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function. * Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source. The current `transformers` version can be verified with: `pip list | grep transformers`. Phi-3-Medium-128k-Instruct is also available in [Azure AI Studio](https://aka.ms/phi3-azure-ai). ### Tokenizer Phi-3-Medium-128k-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Chat Format Given the nature of the training data, the Phi-3-Medium-128k-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follow: ```markdown <|user|>\nQuestion <|end|>\n<|assistant|> ``` For example: ```markdown <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after `<|assistant|>` . In case of few-shots prompt, the prompt can be formatted as the following: ```markdown <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Sample inference code This code snippets show how to get quickly started with running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model_id = "microsoft/Phi-3-medium-128k-instruct" model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_id) messages = [ {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` *Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.* ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3-Medium-128k-Instruct has 14B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines. * Inputs: Text. It is best suited for prompts using chat format. * Context length: 128k tokens * GPUs: 512 H100-80G * Training time: 42 days * Training data: 4.8T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models. * Release dates: The model weight is released on May 21, 2024. ### Datasets Our training data includes a wide variety of sources, totaling 4.8 trillion tokens (including 10% multilingual), and is a combination of 1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. We are focusing on the quality of data that could potentially improve the reasoning ability for the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in premier league in a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for the small size models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report). ## Benchmarks We report the results for Phi-3-Medium-128k-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mixtral-8x22b, Gemini-Pro, Command R+ 104B, Llama-3-70B-Instruct, GPT-3.5-Turbo-1106, and GPT-4-Turbo-1106(Chat). All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation. As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k–shot examples is listed per-benchmark. |Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |---------|-----------------------|--------|-------------|-------------------|-------------------|----------|------------------------| |AGI Eval<br>5-shot|49.7|50.1|54.0|56.9|48.4|49.0|59.6| |MMLU<br>5-shot|76.6|73.8|76.2|80.2|71.4|66.7|84.0| |BigBench Hard<br>3-shot|77.9|74.1|81.8|80.4|68.3|75.6|87.7| |ANLI<br>7-shot|57.3|63.4|65.2|68.3|58.1|64.2|71.7| |HellaSwag<br>5-shot|81.6|78.0|79.0|82.6|78.8|76.2|88.3| |ARC Challenge<br>10-shot|91.0|86.9|91.3|93.0|87.4|88.3|95.6| |ARC Easy<br>10-shot|97.6|95.7|96.9|98.2|96.3|96.1|98.8| |BoolQ<br>2-shot|86.5|86.1|82.7|89.1|79.1|86.4|91.3| |CommonsenseQA<br>10-shot|82.2|82.0|82.0|84.4|79.6|81.8|86.7| |MedQA<br>2-shot|67.6|59.2|67.9|78.5|63.4|58.2|83.7| |OpenBookQA<br>10-shot|87.2|86.8|88.6|91.8|86.0|86.4|93.4| |PIQA<br>5-shot|87.8|86.4|85.0|85.3|86.6|86.2|90.1| |Social IQA<br>5-shot|79.0|75.3|78.2|81.1|68.3|75.4|81.7| |TruthfulQA (MC2)<br>10-shot|74.3|57.8|67.4|81.9|67.7|72.6|85.2| |WinoGrande<br>5-shot|78.9|77.0|75.3|83.3|68.8|72.2|86.7| |TriviaQA<br>5-shot|73.9|82.8|84.5|78.5|85.8|80.2|73.3| |GSM8K Chain of Thought<br>8-shot|87.5|78.3|83.8|93.5|78.1|80.4|94.2| |HumanEval<br>0-shot|58.5|61.6|39.6|78.7|62.2|64.4|79.9| |MBPP<br>3-shot|73.8|68.9|70.7|81.3|77.8|73.2|86.7| |Average|77.3|75.0|76.3|82.5|74.3|75.4|85.2| We take a closer look at different categories across 80 public benchmark datasets at the table below: |Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |--------|------------------------|--------|-------------|-------------------|-------------------|----------|------------------------| | Popular aggregated benchmark | 72.3 | 69.9 | 73.4 | 76.3 | 67.0 | 67.5 | 80.5 | | Reasoning | 83.2 | 79.3 | 81.5 | 86.7 | 78.3 | 80.4 | 89.3 | | Language understanding | 75.3 | 75.7 | 78.7 | 77.9 | 70.4 | 75.3 | 81.6 | | Code generation | 64.2 | 68.6 | 60.0 | 69.3 | 70.4 | 66.7 | 76.1 | | Math | 52.9 | 45.3 | 52.5 | 59.7 | 52.8 | 50.9 | 67.1 | | Factual knowledge | 47.5 | 60.3 | 60.6 | 52.4 | 63.4 | 54.6 | 45.9 | | Multilingual | 62.2 | 67.8 | 69.8 | 62.0 | 67.0 | 73.4 | 78.2 | | Robustness | 70.2 | 57.9 | 65.5 | 78.7 | 69.3 | 69.7 | 84.6 | ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3-Medium model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: + Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128k](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda) ## Cross Platform Support ONNX runtime ecosystem now supports Phi3 Medium models across platforms and hardware. Optimized phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktops GPUs (AMD, Intel, and NVIDIA). Along with DML, ONNX Runtime provides cross platform support for Phi3 Medium across a range of devices CPU, GPU, and mobile. Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. ONNX model for int4 CUDA: Quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN ## License The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-medium-128k/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
Zoyd/microsoft_Phi-3-medium-128k-instruct-2_5bpw_exl2
Zoyd
2024-05-22T07:42:34Z
4
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "custom_code", "multilingual", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-22T05:33:50Z
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation tags: - nlp - code inference: parameters: temperature: 0.7 widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- **Exllamav2** quant (**exl2** / **2.5 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-2_2bpw_exl2)**</center> | <center>4032 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-2_5bpw_exl2)**</center> | <center>4496 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-3_0bpw_exl2)**</center> | <center>5312 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-3_5bpw_exl2)**</center> | <center>6124 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-3_75bpw_exl2)**</center> | <center>6529 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-4_0bpw_exl2)**</center> | <center>6936 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-4_25bpw_exl2)**</center> | <center>7343 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-5_0bpw_exl2)**</center> | <center>8559 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-6_0bpw_exl2)**</center> | <center>10212 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-6_5bpw_exl2)**</center> | <center>11033 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-8_0bpw_exl2)**</center> | <center>12810 MB</center> | <center>8</center> | ## Model Summary The Phi-3-Medium-128K-Instruct is a 14B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties. The model belongs to the Phi-3 family with the Medium version in two variants [4k](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) which is the context length (in tokens) that it can support. The model has underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Medium-128K-Instruct showcased a robust and state-of-the-art performance among models of the same-size and next-size-up. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) | | Short Context | Long Context | | ------- | ------------- | ------------ | | Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)| | Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)| | Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)| | Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct)| ## Intended Uses **Primary use cases** The model is intended for broad commercial and research use in English. The model provides uses for general purpose AI systems and applications which require : 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. ## How to Use Phi-3-Medium-128k-Instruct has been integrated in the development version (4.40.2) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following: * When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function. * Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source. The current `transformers` version can be verified with: `pip list | grep transformers`. Phi-3-Medium-128k-Instruct is also available in [Azure AI Studio](https://aka.ms/phi3-azure-ai). ### Tokenizer Phi-3-Medium-128k-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Chat Format Given the nature of the training data, the Phi-3-Medium-128k-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follow: ```markdown <|user|>\nQuestion <|end|>\n<|assistant|> ``` For example: ```markdown <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after `<|assistant|>` . In case of few-shots prompt, the prompt can be formatted as the following: ```markdown <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Sample inference code This code snippets show how to get quickly started with running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model_id = "microsoft/Phi-3-medium-128k-instruct" model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_id) messages = [ {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` *Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.* ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3-Medium-128k-Instruct has 14B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines. * Inputs: Text. It is best suited for prompts using chat format. * Context length: 128k tokens * GPUs: 512 H100-80G * Training time: 42 days * Training data: 4.8T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models. * Release dates: The model weight is released on May 21, 2024. ### Datasets Our training data includes a wide variety of sources, totaling 4.8 trillion tokens (including 10% multilingual), and is a combination of 1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. We are focusing on the quality of data that could potentially improve the reasoning ability for the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in premier league in a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for the small size models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report). ## Benchmarks We report the results for Phi-3-Medium-128k-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mixtral-8x22b, Gemini-Pro, Command R+ 104B, Llama-3-70B-Instruct, GPT-3.5-Turbo-1106, and GPT-4-Turbo-1106(Chat). All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation. As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k–shot examples is listed per-benchmark. |Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |---------|-----------------------|--------|-------------|-------------------|-------------------|----------|------------------------| |AGI Eval<br>5-shot|49.7|50.1|54.0|56.9|48.4|49.0|59.6| |MMLU<br>5-shot|76.6|73.8|76.2|80.2|71.4|66.7|84.0| |BigBench Hard<br>3-shot|77.9|74.1|81.8|80.4|68.3|75.6|87.7| |ANLI<br>7-shot|57.3|63.4|65.2|68.3|58.1|64.2|71.7| |HellaSwag<br>5-shot|81.6|78.0|79.0|82.6|78.8|76.2|88.3| |ARC Challenge<br>10-shot|91.0|86.9|91.3|93.0|87.4|88.3|95.6| |ARC Easy<br>10-shot|97.6|95.7|96.9|98.2|96.3|96.1|98.8| |BoolQ<br>2-shot|86.5|86.1|82.7|89.1|79.1|86.4|91.3| |CommonsenseQA<br>10-shot|82.2|82.0|82.0|84.4|79.6|81.8|86.7| |MedQA<br>2-shot|67.6|59.2|67.9|78.5|63.4|58.2|83.7| |OpenBookQA<br>10-shot|87.2|86.8|88.6|91.8|86.0|86.4|93.4| |PIQA<br>5-shot|87.8|86.4|85.0|85.3|86.6|86.2|90.1| |Social IQA<br>5-shot|79.0|75.3|78.2|81.1|68.3|75.4|81.7| |TruthfulQA (MC2)<br>10-shot|74.3|57.8|67.4|81.9|67.7|72.6|85.2| |WinoGrande<br>5-shot|78.9|77.0|75.3|83.3|68.8|72.2|86.7| |TriviaQA<br>5-shot|73.9|82.8|84.5|78.5|85.8|80.2|73.3| |GSM8K Chain of Thought<br>8-shot|87.5|78.3|83.8|93.5|78.1|80.4|94.2| |HumanEval<br>0-shot|58.5|61.6|39.6|78.7|62.2|64.4|79.9| |MBPP<br>3-shot|73.8|68.9|70.7|81.3|77.8|73.2|86.7| |Average|77.3|75.0|76.3|82.5|74.3|75.4|85.2| We take a closer look at different categories across 80 public benchmark datasets at the table below: |Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |--------|------------------------|--------|-------------|-------------------|-------------------|----------|------------------------| | Popular aggregated benchmark | 72.3 | 69.9 | 73.4 | 76.3 | 67.0 | 67.5 | 80.5 | | Reasoning | 83.2 | 79.3 | 81.5 | 86.7 | 78.3 | 80.4 | 89.3 | | Language understanding | 75.3 | 75.7 | 78.7 | 77.9 | 70.4 | 75.3 | 81.6 | | Code generation | 64.2 | 68.6 | 60.0 | 69.3 | 70.4 | 66.7 | 76.1 | | Math | 52.9 | 45.3 | 52.5 | 59.7 | 52.8 | 50.9 | 67.1 | | Factual knowledge | 47.5 | 60.3 | 60.6 | 52.4 | 63.4 | 54.6 | 45.9 | | Multilingual | 62.2 | 67.8 | 69.8 | 62.0 | 67.0 | 73.4 | 78.2 | | Robustness | 70.2 | 57.9 | 65.5 | 78.7 | 69.3 | 69.7 | 84.6 | ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3-Medium model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: + Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128k](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda) ## Cross Platform Support ONNX runtime ecosystem now supports Phi3 Medium models across platforms and hardware. Optimized phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktops GPUs (AMD, Intel, and NVIDIA). Along with DML, ONNX Runtime provides cross platform support for Phi3 Medium across a range of devices CPU, GPU, and mobile. Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. ONNX model for int4 CUDA: Quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN ## License The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-medium-128k/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
Salekeen/Phi-3-mini-128k-instruct-Q4_K_M-GGUF
Salekeen
2024-05-22T07:40:59Z
0
0
null
[ "gguf", "nlp", "code", "llama-cpp", "gguf-my-repo", "text-generation", "en", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-22T07:40:47Z
--- language: - en license: mit tags: - nlp - code - llama-cpp - gguf-my-repo license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE pipeline_tag: text-generation widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- # Salekeen/Phi-3-mini-128k-instruct-Q4_K_M-GGUF This model was converted to GGUF format from [`microsoft/Phi-3-mini-128k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo Salekeen/Phi-3-mini-128k-instruct-Q4_K_M-GGUF --model phi-3-mini-128k-instruct.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo Salekeen/Phi-3-mini-128k-instruct-Q4_K_M-GGUF --model phi-3-mini-128k-instruct.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m phi-3-mini-128k-instruct.Q4_K_M.gguf -n 128 ```