| Column | Type | Range |
|:--------------|:----------------|:------------|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0–18.3M |
| metadata | stringlengths | 2–1.07B |
| id | stringlengths | 5–122 |
| last_modified | null | |
| tags | sequencelengths | 1–1.84k |
| sha | null | |
| created_at | stringlengths | 25–25 |
text-generation
transformers
# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).

# Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

model = AutoModelForCausalLM.from_pretrained(
    "styalai/competition-math-phinetune-v1",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("styalai/competition-math-phinetune-v1")

messages = [
    {"role": "user", "content": "What about solving a 2x + 3 = 7 equation?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```

# Info

Fine-tuned from styalai/phi-ne-tuning-1-4, which is itself a fine-tune of Phi-3.

AutoTrain parameters:

```python
project_name = 'competition-math-phinetune-v1' # @param {type:"string"}
model_name = "styalai/phi-ne-tuning-1-4" #'microsoft/Phi-3-mini-4k-instruct' # @param {type:"string"}

#@markdown ---
#@markdown #### Push to Hub?
#@markdown Use these only if you want to push your trained model to a private repo in your Hugging Face Account
#@markdown If you don't use these, the model will be saved in Google Colab and you are required to download it manually.
#@markdown Please enter your Hugging Face write token. The trained model will be saved to your Hugging Face account.
#@markdown You can find your token here: https://huggingface.co/settings/tokens
push_to_hub = True # @param ["False", "True"] {type:"raw"}
hf_token = "hf_****" #@param {type:"string"}
#repo_id = "styalai/phine_tuning_1" #@param {type:"string"}

#@markdown ---
#@markdown #### Hyperparameters
learning_rate = 3e-4 # @param {type:"number"}
num_epochs = 1 #@param {type:"number"}
batch_size = 1 # @param {type:"slider", min:1, max:32, step:1}
block_size = 1024 # @param {type:"number"}
trainer = "sft" # @param ["default", "sft"] {type:"raw"}
warmup_ratio = 0.1 # @param {type:"number"}
weight_decay = 0.01 # @param {type:"number"}
gradient_accumulation = 4 # @param {type:"number"}
mixed_precision = "fp16" # @param ["fp16", "bf16", "none"] {type:"raw"}
peft = True # @param ["False", "True"] {type:"raw"}
quantization = "int4" # @param ["int4", "int8", "none"] {type:"raw"}
lora_r = 16 #@param {type:"number"}
lora_alpha = 32 #@param {type:"number"}
lora_dropout = 0.05 #@param {type:"number"}
```

Code for the creation of the dataset:

```python
from datasets import load_dataset
import pandas as pd

dataset = load_dataset("camel-ai/math")#, streaming=True)

data = {"text": []}
msg1 = dataset["train"]["message_1"]
msg2 = dataset["train"]["message_2"]

for i in range(3500):
    user = "<|user|>" + msg1[i] + "<|end|>\n"
    phi = "<|assistant|>" + msg2[i] + "<|end|>"
    prompt = user + phi
    data["text"].append(prompt)

data = pd.DataFrame.from_dict(data)
print(data)
#os.mkdir("/kaggle/working/data")
data.to_csv('data/dataset.csv', index=False, escapechar='\\')
```

Training command:

```bash
!autotrain llm \
  --train \
  --username "styalai" \
  --merge-adapter \
  --model ${MODEL_NAME} \
  --project-name ${PROJECT_NAME} \
  --data-path data/ \
  --text-column text \
  --lr ${LEARNING_RATE} \
  --batch-size ${BATCH_SIZE} \
  --epochs ${NUM_EPOCHS} \
  --block-size ${BLOCK_SIZE} \
  --warmup-ratio ${WARMUP_RATIO} \
  --lora-r ${LORA_R} \
  --lora-alpha ${LORA_ALPHA} \
  --lora-dropout ${LORA_DROPOUT} \
  --weight-decay ${WEIGHT_DECAY} \
  --gradient-accumulation ${GRADIENT_ACCUMULATION} \
  --quantization ${QUANTIZATION} \
  --mixed-precision ${MIXED_PRECISION} \
  $( [[ "$PEFT" == "True" ]] && echo "--peft" ) \
  $( [[ "$PUSH_TO_HUB" == "True" ]] && echo "--push-to-hub --token ${HF_TOKEN}" )
```

Training duration: 1:07:41
{"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft", "Phi 3"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]}
styalai/competition-math-phinetune-v1
null
[ "transformers", "safetensors", "phi3", "text-generation", "autotrain", "text-generation-inference", "peft", "Phi 3", "conversational", "custom_code", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T17:33:55+00:00
null
null
{}
wojtek2288/CED
null
[ "region:us" ]
null
2024-04-30T17:34:17+00:00
null
null
# Pokémon Classifier

## Description
This is a simple machine learning application that classifies images of Pokémon. Upload an image of one of the three Pokémon Aerodactyl, Charizard, or Victreebel, and the application will tell you which Pokémon it is and report the confidence of the prediction.

## How it works
- **Step 1**: Click "Browse" or drag an image into the upload field.
- **Step 2**: The image is processed automatically, and the model returns the class and the confidence level.

## Model
The model was trained with TensorFlow and can recognize the following Pokémon:
- Aerodactyl
- Charizard
- Victreebel

## Technical details
This Space uses Gradio for the user interface and TensorFlow for the machine learning model. Make sure all dependencies are listed in the `requirements.txt` file.

## About
Built with passion by a Pokémon lover and machine learning enthusiast. Enjoy!
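The card says the Space pairs a Gradio interface with a TensorFlow classifier; below is a minimal sketch of how such an `app.py` might look. The model file name, input size, and label order are assumptions for illustration, not taken from the actual Space.

```python
# Minimal sketch of a Gradio image classifier in the spirit of the Space described above.
# The model path, input size, and class order are assumptions, not taken from the real app.py.
import gradio as gr
import tensorflow as tf

CLASS_NAMES = ["Aerodactyl", "Charizard", "Victreebel"]          # assumed label order
model = tf.keras.models.load_model("pokemon_classifier.h5")     # hypothetical file name

def classify(image):
    # Resize and rescale the uploaded image to match the (assumed) training preprocessing.
    img = tf.image.resize(image, (224, 224)) / 255.0
    probs = model.predict(tf.expand_dims(img, 0))[0]
    # Return {label: confidence} so Gradio's Label component can render it.
    return {name: float(p) for name, p in zip(CLASS_NAMES, probs)}

demo = gr.Interface(fn=classify, inputs=gr.Image(), outputs=gr.Label(num_top_classes=3))
demo.launch()
```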
{"title": "Pok\u00e9mon Klassifizierer", "emoji": "\ud83d\udc22", "colorFrom": "pink", "colorTo": "green", "sdk": "gradio", "sdk_version": "4.28.3", "app_file": "app.py", "pinned": false}
anderni2/pokemon
null
[ "region:us" ]
null
2024-04-30T17:34:47+00:00
null
peft
## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0
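The quantization settings listed above map onto a `BitsAndBytesConfig` in current transformers. A hedged sketch of reproducing that setup when loading a base model and attaching this adapter is below; the base model name is a placeholder, since the card does not state which model the adapter wraps.

```python
# Sketch: recreating the 4-bit bitsandbytes configuration listed above.
# "base-model-id" is a placeholder; the adapter card does not name its base model.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",                 # placeholder
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "aliaksei-kankou/t1")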
{"library_name": "peft"}
aliaksei-kankou/t1
null
[ "peft", "region:us" ]
null
2024-04-30T17:36:04+00:00
null
null
{}
codevang/common_voice_15_0-ko
null
[ "region:us" ]
null
2024-04-30T17:36:52+00:00
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Ragab167/m2m_translation_v
null
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T17:37:17+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 125 | 3.0093 | 17.8475 | 2.6854 | 14.028 | 14.2242 | 18.889 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
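The card gives no usage code; a minimal, hedged inference sketch for this summarization fine-tune follows, using the standard transformers pipeline.

```python
# Sketch: abstractive summarization with the fine-tuned t5-small checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="phdreg/t5-small-finetuned-xsum")

article = "The full text of a news article to be condensed into a single sentence goes here."
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```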
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["xsum"], "base_model": "t5-small", "model-index": [{"name": "t5-small-finetuned-xsum", "results": []}]}
phdreg/t5-small-finetuned-xsum
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "base_model:t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T17:37:19+00:00
text-to-audio
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fil_b64_le5_s8000 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4119 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - training_steps: 8000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:--------:|:----:|:---------------:| | 0.6071 | 21.7391 | 500 | 0.5213 | | 0.5126 | 43.4783 | 1000 | 0.4507 | | 0.4749 | 65.2174 | 1500 | 0.4311 | | 0.454 | 86.9565 | 2000 | 0.4231 | | 0.443 | 108.6957 | 2500 | 0.4173 | | 0.4376 | 130.4348 | 3000 | 0.4169 | | 0.4287 | 152.1739 | 3500 | 0.4133 | | 0.4264 | 173.9130 | 4000 | 0.4150 | | 0.423 | 195.6522 | 4500 | 0.4134 | | 0.4223 | 217.3913 | 5000 | 0.4113 | | 0.4104 | 239.1304 | 5500 | 0.4098 | | 0.4192 | 260.8696 | 6000 | 0.4106 | | 0.4089 | 282.6087 | 6500 | 0.4122 | | 0.4146 | 304.3478 | 7000 | 0.4115 | | 0.4116 | 326.0870 | 7500 | 0.4111 | | 0.4097 | 347.8261 | 8000 | 0.4119 | ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
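No inference code is given in the card; below is a hedged sketch of synthesizing speech with a fine-tuned SpeechT5 checkpoint. The speaker embedding source (the public CMU Arctic x-vectors often used in SpeechT5 examples) and the Filipino test sentence are assumptions, not details from the card.

```python
# Sketch: text-to-speech with the fine-tuned SpeechT5 checkpoint.
# The speaker embedding comes from a public x-vector dataset and is an assumption,
# not the embedding the card's author actually trained with.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

checkpoint = "mikhail-panzo/fil_b64_le5_s8000"
processor = SpeechT5Processor.from_pretrained(checkpoint)
model = SpeechT5ForTextToSpeech.from_pretrained(checkpoint)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Example sentence (Filipino, guessed from the checkpoint name prefix "fil_").
inputs = processor(text="Magandang umaga.", return_tensors="pt")
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```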
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "fil_b64_le5_s8000", "results": []}]}
mikhail-panzo/fil_b64_le5_s8000
null
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "base_model:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-30T17:38:53+00:00
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
martinsinnona/a
null
[ "transformers", "safetensors", "pix2struct", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T17:39:10+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama-7b-sft-DPO This model is a fine-tuned version of [argsearch/llama-7b-sft-float32](https://huggingface.co/argsearch/llama-7b-sft-float32) on the Dahoas/full-hh-rlhf dataset. It achieves the following results on the evaluation set: - Loss: 0.6525 - Rewards/chosen: 0.3315 - Rewards/rejected: 0.1953 - Rewards/accuracies: 0.6080 - Rewards/margins: 0.1362 - Logps/rejected: -633.3815 - Logps/chosen: -690.5654 - Logits/rejected: -1.9212 - Logits/chosen: -1.9766 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6884 | 0.06 | 100 | 0.6886 | 0.0879 | 0.0774 | 0.5647 | 0.0105 | -645.1731 | -714.9250 | -2.7786 | -2.8754 | | 0.6769 | 0.11 | 200 | 0.6809 | 0.2546 | 0.2194 | 0.5747 | 0.0352 | -630.9728 | -698.2556 | -2.6094 | -2.6971 | | 0.6734 | 0.17 | 300 | 0.6755 | 0.2980 | 0.2471 | 0.5833 | 0.0508 | -628.1946 | -693.9142 | -2.5226 | -2.6062 | | 0.6684 | 0.23 | 400 | 0.6713 | 0.3480 | 0.2822 | 0.5888 | 0.0658 | -624.6848 | -688.9108 | -2.4007 | -2.4782 | | 0.6647 | 0.29 | 500 | 0.6671 | 0.3495 | 0.2706 | 0.6048 | 0.0789 | -625.8477 | -688.7593 | -2.3026 | -2.3749 | | 0.6598 | 0.34 | 600 | 0.6636 | 0.3311 | 0.2429 | 0.6058 | 0.0882 | -628.6143 | -690.6030 | -2.1694 | -2.2345 | | 0.6598 | 0.4 | 700 | 0.6606 | 0.2824 | 0.1853 | 0.6106 | 0.0971 | -634.3779 | -695.4718 | -1.9252 | -1.9781 | | 0.6563 | 0.46 | 800 | 0.6585 | 0.3476 | 0.2374 | 0.6071 | 0.1102 | -629.1707 | -688.9521 | -2.0030 | -2.0599 | | 0.6636 | 0.51 | 900 | 0.6572 | 0.3569 | 0.2427 | 0.6119 | 0.1142 | -628.6379 | -688.0209 | -1.9872 | -2.0440 | | 0.6436 | 0.57 | 1000 | 0.6558 | 0.2921 | 0.1732 | 0.6096 | 0.1190 | -635.5912 | -694.4999 | -1.9618 | -2.0181 | | 0.6759 | 0.63 | 1100 | 0.6548 | 0.3436 | 0.2165 | 0.6071 | 0.1272 | -631.2626 | -689.3489 | -1.9627 | -2.0198 | | 0.6679 | 0.69 | 1200 | 0.6542 | 0.3533 | 0.2212 | 0.6077 | 0.1321 | -630.7878 | -688.3820 | -1.9058 | -1.9598 | | 0.6358 | 0.74 | 1300 | 0.6533 | 0.3363 | 0.2036 | 0.6074 | 0.1327 | -632.5449 | -690.0779 | -1.9447 | -2.0015 | | 0.6473 | 0.8 | 1400 | 0.6528 | 0.3378 | 0.2021 | 0.6080 | 0.1357 | -632.6981 | -689.9300 | -1.9072 | -1.9621 | | 0.6447 | 0.86 | 1500 | 0.6526 | 0.3221 | 0.1869 | 0.6080 | 0.1352 | -634.2156 | -691.5005 | -1.9226 | -1.9781 | | 0.6546 | 0.91 | 1600 | 0.6525 | 0.3303 | 0.1941 | 0.6074 | 0.1362 | -633.5018 | -690.6824 | -1.9134 | -1.9684 | | 0.6725 | 0.97 | 1700 | 0.6525 | 0.3312 | 0.1950 | 0.6074 | 0.1363 | 
-633.4115 | -690.5892 | -1.9098 | -1.9645 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.3.0+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
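The card does not include inference code; a hedged generation sketch is below. The "Human:/Assistant:" prompt style mirrors the hh-rlhf data family the model was aligned on, but the exact template is an assumption.

```python
# Sketch: plain causal generation with the DPO-tuned checkpoint.
# The Human/Assistant prompt format is an assumption based on the hh-rlhf training data.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AmberYifan/llama-7b-sft-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "Human: How do I brew a good cup of coffee?\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```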
{"tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["Dahoas/full-hh-rlhf"], "base_model": "argsearch/llama-7b-sft-float32", "model-index": [{"name": "llama-7b-sft-DPO", "results": []}]}
AmberYifan/llama-7b-sft-DPO
null
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "dataset:Dahoas/full-hh-rlhf", "base_model:argsearch/llama-7b-sft-float32", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T17:39:41+00:00
text-generation
transformers
# [MaziyarPanahi/Faust-GGUF](https://huggingface.co/MaziyarPanahi/Faust-GGUF) - Model creator: [AgentPublic](https://huggingface.co/AgentPublic) - Original model: [AgentPublic/Faust](https://huggingface.co/AgentPublic/Faust) ## Description [MaziyarPanahi/Faust-GGUF](https://huggingface.co/MaziyarPanahi/Faust-GGUF) contains GGUF format model files for [AgentPublic/Faust](https://huggingface.co/AgentPublic/Faust). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
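Of the clients listed above, llama-cpp-python offers a short path to a quick test; a hedged sketch is below. It assumes a recent llama-cpp-python with `huggingface_hub` installed, and the quant file pattern is illustrative: pick whichever `.gguf` file in the repo fits your hardware.

```python
# Sketch: loading one of the GGUF quants with llama-cpp-python.
# The filename glob is illustrative; choose the quant that fits your hardware and VRAM.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Faust-GGUF",
    filename="*Q4_K_M.gguf",   # assumed to exist in the repo; see the file list for options
    n_ctx=4096,
)
print(llm("Q: What is GGUF?\nA:", max_tokens=128)["choices"][0]["text"])
```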
{"tags": ["quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "llama", "llama-2", "text-generation"], "model_name": "Faust-GGUF", "base_model": "AgentPublic/Faust", "inference": false, "model_creator": "AgentPublic", "pipeline_tag": "text-generation", "quantized_by": "MaziyarPanahi"}
MaziyarPanahi/Faust-GGUF
null
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "llama", "llama-2", "base_model:AgentPublic/Faust", "text-generation-inference", "region:us" ]
null
2024-04-30T17:40:04+00:00
null
null
{}
minhquy1624/model-education-0105
null
[ "safetensors", "region:us" ]
null
2024-04-30T17:40:56+00:00
null
null
{}
HunterGerlach/merlinite-il-aligned
null
[ "region:us" ]
null
2024-04-30T17:41:09+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
tminh/SeaLLM-7B-v2.5-vi-pubmed-GPTQ
null
[ "transformers", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-30T17:42:04+00:00
text-generation
transformers
<img src="./contrail_logo.svg" width="100%" height="10%" alt=""> # Description # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"language": ["ja", "en"], "license": "apache-2.0", "library_name": "transformers", "datasets": ["sudy-super/JetCopper-10B"]}
sudy-super/contrail-190m-64k
null
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "ja", "en", "dataset:sudy-super/JetCopper-10B", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T17:43:09+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pranavGenAI/ESG_RFP_Query
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T17:43:09+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # shawgpt-ft This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.6818 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.0824 | 0.9231 | 3 | 4.1758 | | 3.6576 | 1.8462 | 6 | 3.6544 | | 3.1555 | 2.7692 | 9 | 3.2104 | | 2.1033 | 4.0 | 13 | 2.9362 | | 2.6127 | 4.9231 | 16 | 2.8504 | | 2.4864 | 5.8462 | 19 | 2.7885 | | 2.3841 | 6.7692 | 22 | 2.7398 | | 1.7399 | 8.0 | 26 | 2.6975 | | 2.2769 | 8.9231 | 29 | 2.6833 | | 1.5 | 9.2308 | 30 | 2.6818 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.1.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
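The card describes a LoRA adapter trained on top of a GPTQ base; below is a hedged sketch of loading the two together for inference. It assumes a GPTQ-capable transformers install (e.g. with optimum and auto-gptq available), and the `[INST]` prompt format follows the Mistral-Instruct base rather than anything stated in this card.

```python
# Sketch: attaching the shawgpt-ft LoRA adapter to its GPTQ base model for inference.
# Requires a GPTQ-capable environment (optimum + auto-gptq or equivalent).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

model = PeftModel.from_pretrained(base, "pranavGenAI/shawgpt-ft")

prompt = "[INST] Summarize what a LoRA adapter is. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=150)[0], skip_special_tokens=True))
```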
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "model-index": [{"name": "shawgpt-ft", "results": []}]}
pranavGenAI/shawgpt-ft
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-04-30T17:43:13+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
huiang/model_sentiment140
null
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T17:44:28+00:00
null
null
{"license": "llama2"}
Devlu-ai/app-listprice
null
[ "license:llama2", "region:us" ]
null
2024-04-30T17:45:20+00:00
text-to-audio
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fil_b64_le4_s8000 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4246 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - training_steps: 8000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:--------:|:----:|:---------------:| | 0.4811 | 22.2222 | 500 | 0.4381 | | 0.4495 | 44.4444 | 1000 | 0.4216 | | 0.4293 | 66.6667 | 1500 | 0.4446 | | 0.4246 | 88.8889 | 2000 | 0.4177 | | 0.4094 | 111.1111 | 2500 | 0.4179 | | 0.3944 | 133.3333 | 3000 | 0.4232 | | 0.3794 | 155.5556 | 3500 | 0.4190 | | 0.3768 | 177.7778 | 4000 | 0.4187 | | 0.3743 | 200.0 | 4500 | 0.4276 | | 0.3598 | 222.2222 | 5000 | 0.4232 | | 0.3634 | 244.4444 | 5500 | 0.4203 | | 0.3558 | 266.6667 | 6000 | 0.4219 | | 0.3502 | 288.8889 | 6500 | 0.4230 | | 0.3529 | 311.1111 | 7000 | 0.4268 | | 0.3447 | 333.3333 | 7500 | 0.4254 | | 0.3371 | 355.5556 | 8000 | 0.4246 | ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "fil_b64_le4_s8000", "results": []}]}
mikhail-panzo/fil_b64_le4_s8000
null
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "base_model:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-30T17:45:46+00:00
fill-mask
transformers
# C-BERT MLM ## Exploring Software Naturalness through Neural Language Models ## Overview This model is the unofficial HuggingFace version of "[C-BERT](http://arxiv.org/abs/2302.04026)" with just the masked language modeling head for pretraining. The weights come from "[An Empirical Comparison of Pre-Trained Models of Source Code](http://arxiv.org/abs/2302.04026)". Please cite the authors if you use this in an academic setting.
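A hedged sketch of querying the MLM head with the fill-mask pipeline is below; it reads the mask token from the checkpoint's own tokenizer rather than hard-coding it, and the C snippet is only an illustrative input.

```python
# Sketch: masked-token prediction over source code with the C-BERT MLM checkpoint.
from transformers import pipeline

fill = pipeline("fill-mask", model="claudios/cbert")

# Use the tokenizer's own mask token rather than assuming a particular string.
code = f"int main() {{ printf(\"hello\"); {fill.tokenizer.mask_token} 0; }}"
for candidate in fill(code, top_k=3):
    print(candidate["token_str"], round(candidate["score"], 3))
```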
{"license": "mit", "tags": ["code"], "arxiv": 2302.04026, "pipeline_tag": "fill-mask"}
claudios/cbert
null
[ "transformers", "safetensors", "bert", "fill-mask", "code", "arxiv:2302.04026", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T17:47:57+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pranavGenAI/ESG_RFP_Chat
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T17:48:03+00:00
null
null
{}
RusuCesar/Ohboi
null
[ "region:us" ]
null
2024-04-30T17:48:41+00:00
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> static quants of https://huggingface.co/nitky/Megac4ai-command-r-plus <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q2_K.gguf.part2of2) | Q2_K | 60.4 | | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.IQ3_XS.gguf.part2of2) | IQ3_XS | 66.8 | | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q3_K_S.gguf.part2of2) | Q3_K_S | 70.3 | | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.IQ3_S.gguf.part2of2) | IQ3_S | 70.5 | beats Q3_K* | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.IQ3_M.gguf.part2of2) | IQ3_M | 73.1 | | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q3_K_M.gguf.part2of2) | Q3_K_M | 78.3 | lower quality | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q3_K_L.gguf.part2of2) | Q3_K_L | 85.2 | | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.IQ4_XS.gguf.part2of2) | IQ4_XS | 87.2 | | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q4_K_S.gguf.part2of2) | Q4_K_S | 91.8 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q4_K_M.gguf.part1of2) [PART 
2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q4_K_M.gguf.part2of2) | Q4_K_M | 96.7 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q5_K_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q5_K_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q5_K_S.gguf.part3of3) | Q5_K_S | 110.8 | | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q5_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q5_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q5_K_M.gguf.part3of3) | Q5_K_M | 113.7 | | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q6_K.gguf.part3of3) | Q6_K | 131.7 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q8_0.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q8_0.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q8_0.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.Q8_0.gguf.part4of4) | Q8_0 | 170.6 | fast, best quality | | [P1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.SOURCE.gguf.part1of7) [P2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.SOURCE.gguf.part2of7) [P3](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.SOURCE.gguf.part3of7) [P4](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.SOURCE.gguf.part4of7) [P5](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.SOURCE.gguf.part5of7) [P6](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.SOURCE.gguf.part6of7) [P7](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF/resolve/main/Megac4ai-command-r-plus.SOURCE.gguf.part7of7) | SOURCE | 321.0 | source gguf, only provided when it was hard to come by | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
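If you prefer not to use `cat`, the following is a minimal Python sketch of the same concatenation step described in the Usage section above (the part filenames are taken from the Q4_K_S row of the table; the output path is a placeholder):

```python
# Minimal sketch: join split GGUF parts from this repo into a single file.
# Part names match the Q4_K_S row above; adjust to whichever quant you downloaded.
parts = [
    "Megac4ai-command-r-plus.Q4_K_S.gguf.part1of2",
    "Megac4ai-command-r-plus.Q4_K_S.gguf.part2of2",
]

with open("Megac4ai-command-r-plus.Q4_K_S.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            # Stream in 1 MiB chunks so the ~90 GB of parts never sit in memory at once.
            while chunk := src.read(1 << 20):
                out.write(chunk)
```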
{"language": ["en"], "license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "nitky/Megac4ai-command-r-plus", "quantized_by": "mradermacher"}
mradermacher/Megac4ai-command-r-plus-GGUF
null
[ "transformers", "mergekit", "merge", "en", "base_model:nitky/Megac4ai-command-r-plus", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-30T17:49:29+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
abc88767/model22
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T17:50:03+00:00
null
null
{}
srajwal1/flan-t5-base-imdb-text-classification
null
[ "region:us" ]
null
2024-04-30T17:52:02+00:00
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-eurosat This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7387 - Accuracy: 0.7947 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 2.0852 | 0.9362 | 11 | 1.6028 | 0.4263 | | 1.2089 | 1.9574 | 23 | 1.1012 | 0.6789 | | 0.7539 | 2.9787 | 35 | 0.9159 | 0.7158 | | 0.4935 | 4.0 | 47 | 0.8390 | 0.7368 | | 0.3742 | 4.9362 | 58 | 0.7865 | 0.7632 | | 0.2641 | 5.6170 | 66 | 0.7387 | 0.7947 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.1.0 - Datasets 2.19.0 - Tokenizers 0.19.1
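A minimal inference sketch for this checkpoint (repo id taken from this page; the image path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
classifier = pipeline(
    "image-classification",
    model="pk3388/vit-base-patch16-224-finetuned-eurosat",
)

# Classify a local image; replace the path with your own file.
for pred in classifier("example.jpg"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```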
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224", "model-index": [{"name": "vit-base-patch16-224-finetuned-eurosat", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.7947368421052632, "name": "Accuracy"}]}]}]}
pk3388/vit-base-patch16-224-finetuned-eurosat
null
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T17:52:15+00:00
text2text-generation
transformers
# Slavic T5 Base The aim of this model is to reach the best results for Slavic languages written in Latin script. It is suitable for tasks such as: - summarization, - extractive question answering, - machine translation between Slavic languages in Latin script. The model is trained on selected parts of the OSCAR corpus and the MaCoCu corpus. It supports these languages: Czech, Croatian, Polish, Slovak, and Slovenian. The vocabulary has 120 000 tokens and is case-sensitive (it contains capital letters).
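A minimal usage sketch (repo id taken from this page; this is a pre-trained checkpoint, so it usually needs task-specific fine-tuning before the generated output is useful, and the plain-text input format is an assumption):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "TUKE-KEMT/slavic-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Plain text input is an assumption; the expected prompt/prefix format is not documented here.
inputs = tokenizer("Praha je hlavní město České republiky.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```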
{"language": ["cs", "cr", "hr", "pl", "sl", "sk"], "datasets": ["oscar", "hieronymusa/MaCoCu-dataset-250k"]}
TUKE-KEMT/slavic-t5-base
null
[ "transformers", "safetensors", "t5", "text2text-generation", "cs", "cr", "hr", "pl", "sl", "sk", "dataset:oscar", "dataset:hieronymusa/MaCoCu-dataset-250k", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T17:52:35+00:00
text-generation
transformers
{}
Weni/WeniGPT-Agents-Llama3-5.0.14-DPO-AWQ
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-30T17:52:36+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
huiang/model_imdb
null
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T17:52:40+00:00
null
null
{"license": "openrail"}
de0nis2011/vladickckck
null
[ "license:openrail", "region:us" ]
null
2024-04-30T17:55:07+00:00
null
null
{}
vabdollahi/code-llama-7b-text-to-sql
null
[ "region:us" ]
null
2024-04-30T17:55:15+00:00
text-generation
transformers
{}
claudios/gptc
null
[ "transformers", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T17:56:15+00:00
feature-extraction
transformers
{}
claudios/mlm_roberta_size
null
[ "transformers", "safetensors", "roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2024-04-30T17:56:16+00:00
feature-extraction
transformers
{}
claudios/sptcode
null
[ "transformers", "safetensors", "bart", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2024-04-30T17:56:18+00:00
null
null
{}
clarm300/hot
null
[ "region:us" ]
null
2024-04-30T17:56:35+00:00
null
null
# Kleiner Astronaut - 126M ![Astronaut holding balloon](cover.png) Model pre-trained on the german [Kleiner Astronaut dataset](https://huggingface.co/datasets/Jotschi/kleiner-astronaut). Example from Dataset: > Eines Tages traten der kleine Astronaut Max und sein neues Roboter-Freundchen, Astro, in ihr schattiges Wohnzimmer… * Architecture: NanoGPT / Custom Arch - Based on GPT-2 * License: Apache-2.0 ## Training Sources: https://github.com/Jotschi/nanoGPT/tree/kleiner-astronaut-126M Hyperparams used to train this model: | Parameter | Value | |----------------|-------| | n_layer | 12 | | n_head | 12 | | n_embd | 780 | | dropout | 0.2 | | batch_size | 32 | | block_size | 256 | | gradient_accumulation_steps | 1 | | learning_rate | 1e-5 | | max_iters | 500000 | | lr_decay_iters | 5000 | | min_lr | 1e-6 | | beta2 | 0.99 | | warmup_iters | 100 | ## Example ``` Die Galaxie war voller ungewöhnlicher Dinge, die man noch nie gesehen hatte – riesige Sterne, die wie kleine Diamanten am Himmel glitzerten, und Sternschnuppen, die wie feine Seifenblasen funkelten. Max und Luna waren so erstaunt, dass sie sich auf einem großen Festmahl herumdrehten, begegneten ihnen über einer Reihe von Galaxien, wo jedes Sternchen einzigartige Geschichten hörte. Als sie zu einem großen Planet umschwommen waren, bemerkten sie einen kleinen Planet, der vor ihnen aufgegangen war. "Wow", murmelte Max, "das ist so großartig!" Er hatte eine kleine Stadt in Form von Sternen darauf und war begeistert von den schönsten Welten. Auf dieser neuen Welt trafen sie auf viele andere kleine Astronauten, die ebenfalls dort lebten. Diese Kinder waren so freundlich wie Max, aber sie trugen mit ihren Hüften und schauten ihnen zu. "Wir sind die Sternengötter", sagte einer der kleinen Sternchen, "und wir sind auf unserer Reise durch den Kosmos." Max lachte und dachte nach, wie er diese neue Welt entdecken würde. Mit einem Sprung in seine kleine Raumkapsel setzte Max los und fuhr durch die Sternenschleife, um nach dem Abenteuer zu suchen. Er kletterte über Flüssigkeiten, die wie flüssige Stücke aussahen und sah wie kleine Sterne auf einer großen Festfeier tanzten. ``` ## Usage The custom model implementation can be found in `model.py`. ```py pip install -r requirements.txt python example.py ``` The model is not compatible with common inference servers.
{"datasets": ["jotschi/kleiner-astronaut"]}
Jotschi/kleiner-astronaut-126M
null
[ "pytorch", "dataset:jotschi/kleiner-astronaut", "region:us" ]
null
2024-04-30T17:57:21+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
OwOOwO/finalupdate2
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T17:57:39+00:00
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-bass-classifier6 This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the TheDuyx/augmented_bass_sounds dataset. It achieves the following results on the evaluation set: - Loss: 0.0110 - Accuracy: 0.9985 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3079 | 1.0 | 3828 | 0.1400 | 0.9724 | | 0.2019 | 2.0 | 7656 | 0.1161 | 0.9803 | | 0.0002 | 3.0 | 11484 | 0.0765 | 0.9897 | | 0.0002 | 4.0 | 15312 | 0.0347 | 0.9959 | | 0.0 | 5.0 | 19140 | 0.0110 | 0.9985 | ### Framework versions - Transformers 4.39.2 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
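A minimal inference sketch for this classifier (repo id taken from this page; the audio path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
classifier = pipeline(
    "audio-classification",
    model="TheDuyx/distilhubert-bass-classifier6",
)

# Replace the path with your own audio file.
for pred in classifier("bass_sample.wav"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```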
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["augmented_bass_sounds"], "metrics": ["accuracy"], "base_model": "ntu-spml/distilhubert", "model-index": [{"name": "distilhubert-bass-classifier6", "results": [{"task": {"type": "audio-classification", "name": "Audio Classification"}, "dataset": {"name": "TheDuyx/augmented_bass_sounds", "type": "augmented_bass_sounds"}, "metrics": [{"type": "accuracy", "value": 0.9985302763080541, "name": "Accuracy"}]}]}]}
TheDuyx/distilhubert-bass-classifier6
null
[ "transformers", "safetensors", "hubert", "audio-classification", "generated_from_trainer", "dataset:augmented_bass_sounds", "base_model:ntu-spml/distilhubert", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-30T17:57:40+00:00
text-generation
transformers
{}
htrivedi99/kwal-mistral-7b-finetune
null
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T17:57:54+00:00
null
null
{}
SageLiao/llava-1.5-7b-hf-ft-amazon-healthcare
null
[ "region:us" ]
null
2024-04-30T17:58:43+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
codevang/fast_small_15
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T18:00:09+00:00
visual-question-answering
transformers
# llama3-vision-alpha projection module trained to add vision capabilties to Llama 3 using SigLIP. built by [@yeswondwerr](https://x.com/yeswondwerr) and [@qtnx_](https://x.com/qtnx_) usable directly in Transformers, **warning: bitsandbytes quantization is currently not supported!** **usage** ``` pip install torch transformers pillow ``` ```python import torch from PIL import Image from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "qresearch/llama-3-vision-alpha-hf" model = AutoModelForCausalLM.from_pretrained( model_id, trust_remote_code=True, torch_dtype=torch.float16 ).to("cuda") tokenizer = AutoTokenizer.from_pretrained( model_id, use_fast=True, ) image = Image.open("image_path") print( tokenizer.decode( model.answer_question(image, "question", tokenizer), skip_special_tokens=True, ) ) ``` **examples** | Image | Examples | | --- | --- | | <img src="assets/demo-1.jpg" width="300"/> | **What is the title of this book? answer briefly**<br>The title of the book is "The Little Book of Deep Learning".<br><br>**Where is the person standing? answer briefly**<br>The person is standing on the balcony.<br><br>**Describe the image**<br>The image shows a person holding a book with a cityscape visible through the window behind them. The book has a cover with a title that reads "The Little Book of Deep Learning" in bold letters. | | <img src="assets/demo-2.jpg" width="300"/> | **What type of food is the girl holding? answer briefly**<br>A hamburger!<br><br>**What color is the woman's hair? answer briefly**<br>It's white!<br><br>**Describe the image**<br>The image is of a young girl with short, curly hair and a sweet smile, holding a giant hamburger in her hand. She's sitting at a table with a festive dinner setting, surrounded by candles and a warm glow. Her eyes are shining with excitement and contentment as she takes a big bite of the burger. | **acknowledgements** - Liu et al. : [LLaVA](https://arxiv.org/abs/2304.08485) - Moon et al. : [AnyMAL](https://arxiv.org/abs/2309.16058) - vikhyatk : moondream, test images ``` .x+=:. z` ^% .uef^" .u . . <k .u . :d88E .u@u .d88B :@8c .u .@8Ned8" .u u .d88B :@8c . `888E .zWF8888bx ="8888f8888r ud8888. .@^%8888" ud8888. us888u. ="8888f8888r .udR88N 888E .z8k .888 9888 4888>'88" :888'8888. x88: `)8b. :888'8888. .@88 "8888" 4888>'88" <888'888k 888E~?888L I888 9888 4888> ' d888 '88%" 8888N=*8888 d888 '88%" 9888 9888 4888> ' 9888 'Y" 888E 888E I888 9888 4888> 8888.+" %8" R88 8888.+" 9888 9888 4888> 9888 888E 888E I888 9888 .d888L .+ 8888L @8Wou 9% 8888L 9888 9888 .d888L .+ 9888 888E 888E `888Nx?888 ^"8888*" '8888c. .+ .888888P` '8888c. .+ 9888 9888 ^"8888*" ?8888u../ 888E 888E "88" '888 "Y" "88888% ` ^"F "88888% "888*""888" "Y" "8888P' m888N= 888> 88E "YP' "YP' ^Y" ^Y' "P' `Y" 888 98> J88" '8 @% ` :" ```
{"language": ["en"], "license": "llama3", "datasets": ["liuhaotian/LLaVA-CC3M-Pretrain-595K"], "pipeline_tag": "visual-question-answering"}
qresearch/llama-3-vision-alpha-hf
null
[ "transformers", "safetensors", "llamavision", "text-generation", "visual-question-answering", "custom_code", "en", "dataset:liuhaotian/LLaVA-CC3M-Pretrain-595K", "arxiv:2304.08485", "arxiv:2309.16058", "license:llama3", "autotrain_compatible", "region:us" ]
null
2024-04-30T18:00:17+00:00
null
peft
## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - _load_in_8bit: False - _load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 - load_in_4bit: True - load_in_8bit: False ### Framework versions - PEFT 0.4.0
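A minimal loading sketch that reconstructs the quantization settings listed above with current `transformers`/`peft` APIs (the base checkpoint name is an assumption inferred from the adapter's repo id `moiseserg/llama-2-7b-datyra`; replace it with the actual base model the adapter was trained on):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Reconstruction of the 4-bit NF4 settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Base checkpoint is an assumption (the adapter name suggests a Llama-2-7B base).
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the LoRA adapter from this repo on top of the quantized base model.
model = PeftModel.from_pretrained(base, "moiseserg/llama-2-7b-datyra")
```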
{"library_name": "peft"}
moiseserg/llama-2-7b-datyra
null
[ "peft", "region:us" ]
null
2024-04-30T18:01:30+00:00
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/DevsDoCode/Llama-3-8B-Instruct-1048k <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.i1-Q4_K_S.gguf) | i1-Q4_K_S | 
4.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
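One common way to run a single-file quant from this repo is with `llama-cpp-python`; the sketch below assumes you have already downloaded the i1-Q4_K_M file named in the table above, and the prompt is a placeholder:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Point model_path at a quant downloaded from this repo.
llm = Llama(
    model_path="Llama-3-8B-Instruct-1048k.i1-Q4_K_M.gguf",
    n_ctx=8192,  # raise this to use more of the advertised long context
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one paragraph."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```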
{"language": ["en"], "library_name": "transformers", "base_model": "DevsDoCode/Llama-3-8B-Instruct-1048k", "quantized_by": "mradermacher"}
mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF
null
[ "transformers", "gguf", "en", "base_model:DevsDoCode/Llama-3-8B-Instruct-1048k", "endpoints_compatible", "region:us" ]
null
2024-04-30T18:01:57+00:00
reinforcement-learning
sample-factory
An **APPO** model trained on the **GDY-PowerGrid** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r metta-ai/baseline.v0.1.1 ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m <path.to.enjoy.module> --algo=APPO --env=GDY-PowerGrid --train_dir=./train_dir --experiment=baseline.v0.1.1 ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details. ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m <path.to.train.module> --algo=APPO --env=GDY-PowerGrid --train_dir=./train_dir --experiment=baseline.v0.1.1 --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
{"library_name": "sample-factory", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "sample-factory"]}
metta-ai/baseline.v0.1.1
null
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "region:us" ]
null
2024-04-30T18:02:31+00:00
null
null
{}
harveykw/harveymodel
null
[ "region:us" ]
null
2024-04-30T18:02:39+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Nyxmmz/lora
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T18:04:51+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["llama-factory"]}
justinj92/phi3-orpo
null
[ "transformers", "safetensors", "phi3", "text-generation", "llama-factory", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us", "has_space" ]
null
2024-04-30T18:05:10+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
gytdau/mazegpt
null
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T18:05:20+00:00
text-generation
mlx
# mlx-community/Meta-Llama-3-70B-Instruct-mlx-unquantized This model was converted to MLX format from [`meta-llama/Meta-Llama-3-70B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) using mlx-lm version **0.12.1**. Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/Meta-Llama-3-70B-Instruct-mlx-unquantized") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
{"language": ["en"], "license": "other", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "mlx"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. 
If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. 
Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit", "widget": [{"example_title": "Winter holidays", "messages": [{"role": "system", "content": "You are a helpful and honest assistant. 
Please, respond concisely and truthfully."}, {"role": "user", "content": "Can you recommend a good destination for Winter holidays?"}]}, {"example_title": "Programming assistant", "messages": [{"role": "system", "content": "You are a helpful and honest code and programming assistant. Please, respond concisely and truthfully."}, {"role": "user", "content": "Write a function that computes the nth fibonacci number."}]}], "inference": {"parameters": {"max_new_tokens": 300, "stop": ["<|end_of_text|>", "<|eot_id|>"]}}}
mlx-community/Meta-Llama-3-70B-Instruct-mlx-unquantized
null
[ "mlx", "safetensors", "llama", "facebook", "meta", "pytorch", "llama-3", "text-generation", "conversational", "en", "license:other", "region:us" ]
null
2024-04-30T18:05:50+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"language": ["my"], "library_name": "transformers", "pipeline_tag": "text-generation"}
Turtle344/mistral-health-faq-v1
null
[ "transformers", "safetensors", "mistral", "text-generation", "my", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T18:06:36+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # uae-climate-multi-classifier-weighted This model is a fine-tuned version of [alex-miller/ODABert](https://huggingface.co/alex-miller/ODABert) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0936 - Accuracy: 0.9197 - F1: 0.7059 - Precision: 0.7 - Recall: 0.7119 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 1.2281 | 1.0 | 246 | 1.4142 | 0.8670 | 0.0333 | 1.0 | 0.0169 | | 1.1785 | 2.0 | 492 | 1.3474 | 0.8830 | 0.3014 | 0.7857 | 0.1864 | | 1.0939 | 3.0 | 738 | 1.1641 | 0.8784 | 0.4421 | 0.5833 | 0.3559 | | 0.9503 | 4.0 | 984 | 1.1304 | 0.9083 | 0.5455 | 0.8276 | 0.4068 | | 0.8772 | 5.0 | 1230 | 1.1369 | 0.9197 | 0.6154 | 0.875 | 0.4746 | | 1.0567 | 6.0 | 1476 | 0.9456 | 0.9151 | 0.6667 | 0.7115 | 0.6271 | | 0.9089 | 7.0 | 1722 | 0.9752 | 0.9174 | 0.6727 | 0.7255 | 0.6271 | | 0.9149 | 8.0 | 1968 | 1.1084 | 0.9197 | 0.6667 | 0.7609 | 0.5932 | | 0.8285 | 9.0 | 2214 | 0.8487 | 0.9037 | 0.6818 | 0.6164 | 0.7627 | | 0.8322 | 10.0 | 2460 | 1.1982 | 0.9174 | 0.6471 | 0.7674 | 0.5593 | | 0.7955 | 11.0 | 2706 | 1.0507 | 0.9174 | 0.6667 | 0.7347 | 0.6102 | | 0.7942 | 12.0 | 2952 | 0.9965 | 0.9197 | 0.7059 | 0.7 | 0.7119 | | 0.7152 | 13.0 | 3198 | 1.0222 | 0.9197 | 0.7009 | 0.7069 | 0.6949 | | 0.7015 | 14.0 | 3444 | 0.9579 | 0.9197 | 0.7244 | 0.6765 | 0.7797 | | 0.5537 | 15.0 | 3690 | 1.0659 | 0.9220 | 0.7069 | 0.7193 | 0.6949 | | 0.6379 | 16.0 | 3936 | 1.0921 | 0.9197 | 0.6957 | 0.7143 | 0.6780 | | 0.5977 | 17.0 | 4182 | 1.0491 | 0.9220 | 0.7213 | 0.6984 | 0.7458 | | 0.6792 | 18.0 | 4428 | 1.0872 | 0.9197 | 0.7059 | 0.7 | 0.7119 | | 0.5509 | 19.0 | 4674 | 1.0763 | 0.9220 | 0.7167 | 0.7049 | 0.7288 | | 0.579 | 20.0 | 4920 | 1.0936 | 0.9197 | 0.7059 | 0.7 | 0.7119 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
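The card above lists fine-tuning hyperparameters and evaluation metrics but no inference snippet. The following is a minimal usage sketch, assuming the checkpoint is published under the repository id shown in this record and exposes a standard sequence-classification head:

```python
from transformers import pipeline

# Minimal sketch (assumption: the fine-tuned checkpoint is public on the Hub
# and works with the standard text-classification pipeline).
classifier = pipeline(
    "text-classification",
    model="alex-miller/uae-climate-multi-classifier-weighted",
)

# Hypothetical example sentence; the label names depend on the training setup.
print(classifier("Grant to support coastal flood resilience and mangrove restoration."))
```

Because the label mapping is not documented in the card, the returned label ids should be checked against the model's `config.json` before interpreting the scores.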
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "alex-miller/ODABert", "model-index": [{"name": "uae-climate-multi-classifier-weighted", "results": []}]}
alex-miller/uae-climate-multi-classifier-weighted
null
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:alex-miller/ODABert", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T18:06:49+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/mizoru/ORD/runs/t813gl5z) # Whisper Small Ru ORD 0.7 PEFT LoRA - Mizoru This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the ORD_0.7 dataset. It achieves the following results on the evaluation set: - eval_loss: 1.3021 - eval_wer: 60.8525 - eval_cer: 33.9271 - eval_clean_wer: 53.2332 - eval_clean_cer: 28.0130 - eval_runtime: 1732.3102 - eval_samples_per_second: 0.451 - eval_steps_per_second: 0.028 - epoch: 7.0 - step: 1372 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 2000 - mixed_precision_training: Native AMP ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.41.0.dev0 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.19.1
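Since this repository is a PEFT LoRA checkpoint rather than a full model, a plausible way to load it is to attach the adapter to the `openai/whisper-small` base model named in the metadata. This is a hedged sketch, not the author's published loading code:

```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Assumption: the repository stores only LoRA adapter weights for
# openai/whisper-small (as suggested by the base_model metadata).
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model = PeftModel.from_pretrained(base, "mizoru/whisper-small-ru-ORD_0.7_peft_0.3")
processor = WhisperProcessor.from_pretrained("openai/whisper-small")

# model.generate(...) can then be run on log-mel features produced by the processor.
```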
{"language": ["ru"], "license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper Small Ru ORD 0.7 PEFT LoRA - Mizoru ", "results": []}]}
mizoru/whisper-small-ru-ORD_0.7_peft_0.3
null
[ "peft", "safetensors", "generated_from_trainer", "ru", "base_model:openai/whisper-small", "license:apache-2.0", "region:us" ]
null
2024-04-30T18:06:59+00:00
text-generation
transformers
{}
itay-nakash/model_5706f6d2bd
null
[ "transformers", "mistral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T18:08:53+00:00
text-generation
transformers
# GreenBit LLMs This is one of GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance. Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and for more information.
{"license": "apache-2.0"}
GreenBitAI/Llama-3-70B-instruct-layer-mix-bpw-4.0
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T18:08:55+00:00
text2text-generation
transformers
{}
claudios/t5-learning-mt-task-balanced
null
[ "transformers", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T18:09:36+00:00
token-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
matthewleechen/test_ner_patentspecs
null
[ "transformers", "safetensors", "bert", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T18:09:45+00:00
text2text-generation
transformers
{}
claudios/t5-learning-mt-task-unbalanced
null
[ "transformers", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T18:11:12+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
SeemalT/gemma2b-finetuned
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T18:11:36+00:00
null
null
{}
iyubondyrev/phi_3_mini_quantized
null
[ "gguf", "region:us" ]
null
2024-04-30T18:11:47+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eurus-7b-cost-UC-5e-7 This model is a fine-tuned version of [openbmb/Eurus-7b-sft](https://huggingface.co/openbmb/Eurus-7b-sft) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6920 - Rewards/chosen: -0.5152 - Rewards/rejected: -0.5522 - Rewards/accuracies: 0.5560 - Rewards/margins: 0.0369 - Rewards/margins Max: 0.5671 - Rewards/margins Min: -0.4943 - Rewards/margins Std: 0.3483 - Logps/rejected: -312.9047 - Logps/chosen: -326.6244 - Logits/rejected: -2.1863 - Logits/chosen: -2.3047 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Rewards/margins Max | Rewards/margins Min | Rewards/margins Std | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:-------------------:|:-------------------:|:-------------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.3977 | 1.0 | 968 | 0.6920 | -0.5152 | -0.5522 | 0.5560 | 0.0369 | 0.5671 | -0.4943 | 0.3483 | -312.9047 | -326.6244 | -2.1863 | -2.3047 | ### Framework versions - PEFT 0.7.1 - Transformers 4.39.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
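The repository is tagged as a PEFT adapter over `openbmb/Eurus-7b-sft`, so a reasonable (unverified) way to load it is to apply the adapter on top of that base model:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the repo holds a DPO-trained PEFT adapter, not merged full weights.
base = AutoModelForCausalLM.from_pretrained("openbmb/Eurus-7b-sft", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "just1nseo/eurus-7b-cost-UC-5e-7")
tokenizer = AutoTokenizer.from_pretrained("openbmb/Eurus-7b-sft")
```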
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "openbmb/Eurus-7b-sft", "model-index": [{"name": "eurus-7b-cost-UC-5e-7", "results": []}]}
just1nseo/eurus-7b-cost-UC-5e-7
null
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:openbmb/Eurus-7b-sft", "license:apache-2.0", "region:us" ]
null
2024-04-30T18:11:52+00:00
text2text-generation
transformers
{}
claudios/t5-learning-no-pretraining-ag-task
null
[ "transformers", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T18:12:47+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
hyojin99/whisper-medium
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T18:13:12+00:00
text2text-generation
transformers
{}
claudios/t5-learning-no-pretraining-bf-task
null
[ "transformers", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T18:14:20+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-70m_mz-135_WordLength_n-its-10-seed-3 This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 3 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
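As with the other trainer-generated cards, no inference snippet is provided. Assuming the checkpoint is a standard sequence-classification head fine-tuned from `EleutherAI/pythia-70m` and published under the repository id in this record, loading it might look like this:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: the fine-tuned classifier is public under this repository id.
repo = "AlignmentResearch/robust_llm_pythia-70m_mz-135_WordLength_n-its-10-seed-3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("a short example sentence", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # predicted class id; label meaning is not documented
```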
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-70m", "model-index": [{"name": "robust_llm_pythia-70m_mz-135_WordLength_n-its-10-seed-3", "results": []}]}
AlignmentResearch/robust_llm_pythia-70m_mz-135_WordLength_n-its-10-seed-3
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-70m", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T18:14:31+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-70m_mz-135_WordLength_n-its-10-seed-2 This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-70m", "model-index": [{"name": "robust_llm_pythia-70m_mz-135_WordLength_n-its-10-seed-2", "results": []}]}
AlignmentResearch/robust_llm_pythia-70m_mz-135_WordLength_n-its-10-seed-2
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-70m", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T18:14:34+00:00
null
null
{"license": "apache-2.0"}
mselek/Mrt
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-30T18:14:50+00:00
token-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
nlpproject/NER_distilBERT
null
[ "transformers", "safetensors", "distilbert", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T18:15:47+00:00
text2text-generation
transformers
{}
claudios/t5-learning-no-pretraining-cs-task
null
[ "transformers", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T18:15:54+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # adapter_ThaiSC_LLM_Scamper This model is a fine-tuned version of [openthaigpt/openthaigpt-1.0.0-7b-chat](https://huggingface.co/openthaigpt/openthaigpt-1.0.0-7b-chat) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.38.0 - Pytorch 2.2.1+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "openthaigpt/openthaigpt-1.0.0-7b-chat", "model-index": [{"name": "adapter_ThaiSC_LLM_Scamper", "results": []}]}
SuperkingbasSKB/adapter_ThaiSC_LLM_Scamper
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:openthaigpt/openthaigpt-1.0.0-7b-chat", "license:apache-2.0", "region:us" ]
null
2024-04-30T18:16:35+00:00
text2text-generation
transformers
{}
claudios/t5-learning-no-pretraining-mg-task
null
[ "transformers", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T18:17:25+00:00
text-classification
transformers
{}
PavanDeepak/text-classification-model-iab-categories-revised-bert-base-uncased-final
null
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T18:18:03+00:00
null
null
{}
Arbiter586/Arbiter586
null
[ "region:us" ]
null
2024-04-30T18:18:04+00:00
null
null
{}
esarnb/penguino-js
null
[ "region:us" ]
null
2024-04-30T18:18:28+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eurus-7b-cost-UI-UC-5e-7 This model is a fine-tuned version of [openbmb/Eurus-7b-sft](https://huggingface.co/openbmb/Eurus-7b-sft) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7012 - Rewards/chosen: -0.8455 - Rewards/rejected: -0.9219 - Rewards/accuracies: 0.5507 - Rewards/margins: 0.0764 - Rewards/margins Max: 0.9500 - Rewards/margins Min: -0.7111 - Rewards/margins Std: 0.5453 - Logps/rejected: -358.1681 - Logps/chosen: -363.9157 - Logits/rejected: -2.1126 - Logits/chosen: -2.2173 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Rewards/margins Max | Rewards/margins Min | Rewards/margins Std | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:-------------------:|:-------------------:|:-------------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.296 | 1.0 | 2616 | 0.7012 | -0.8455 | -0.9219 | 0.5507 | 0.0764 | 0.9500 | -0.7111 | 0.5453 | -358.1681 | -363.9157 | -2.1126 | -2.2173 | ### Framework versions - PEFT 0.7.1 - Transformers 4.39.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "openbmb/Eurus-7b-sft", "model-index": [{"name": "eurus-7b-cost-UI-UC-5e-7", "results": []}]}
just1nseo/eurus-7b-cost-UI-UC-5e-7
null
[ "peft", "safetensors", "mistral", "trl", "dpo", "generated_from_trainer", "base_model:openbmb/Eurus-7b-sft", "license:apache-2.0", "4-bit", "region:us" ]
null
2024-04-30T18:18:30+00:00
null
null
{}
medric49/dpo-pythia-410m
null
[ "region:us" ]
null
2024-04-30T18:18:31+00:00
text2text-generation
transformers
{}
claudios/t5-learning-with-pretraining-ag-task
null
[ "transformers", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T18:18:57+00:00
null
transformers
# Uploaded model - **Developed by:** zeeshanali01 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/Phi-3-mini-4k-instruct-bnb-4bit"}
zeeshanali01/lora_model
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-30T18:20:08+00:00
feature-extraction
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
claudios/unixcoder-base-nine
null
[ "transformers", "safetensors", "roberta", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T18:20:26+00:00
text2text-generation
transformers
{}
claudios/t5-learning-with-pretraining-bf-task
null
[ "transformers", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T18:20:32+00:00
null
null
# Model: Grain Classification Model ### Description: This model is designed to classify 4 different types of grains: noodles, rice, couscous, and oatmeal, using the FastAI library with Fastbook and the ResNet-18 convolutional neural network architecture. <br> The model files can be found in the "Files and Versions" section. ### Training Data - Limited dataset of 400 labeled images of grains, with 100 images per category. - Obtained using the DuckDuckGo Images API. - Resized to 128x128 pixels to reduce storage and computation usage. - 3 additional training epochs are performed to fine-tune the model for the grain classification task. ### Metrics Model performance is evaluated using a confusion matrix, which captures both accuracy and precision. Each prediction falls into one of 4 categories: - True positive: the model correctly predicts the positive class. - False positive: the model predicts the positive class when the actual class is negative. - True negative: the model correctly predicts the negative class. - False negative: the model predicts the negative class when the actual class is positive. ### Results In the images below, the different shades indicate prediction quality: - The darker shades represent correct predictions. - The lighter shades represent incorrect predictions. <table> <tr> <td>First training:</td> <td>Second training:</td> </tr> <tr> <td><img src="1GrainFirstTraining.png" alt="First training" width="300"/></td> <td><img src="2GrainRetrain.png" alt="Second training" width="300"/></td> </tr> </table> The model improved after cleaning the data: - It predicts more of the images correctly. - Accuracy is better in the second training than in the first; the number of darker shades is higher. - There is less confusion in the second training than in the first; the number of lighter shades is lower.
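A minimal reproduction sketch of the setup described above, assuming the standard fastai/fastbook image-classification workflow (the search terms, download counts, and exact calls are assumptions, not the author's original notebook):

```python
from fastai.vision.all import *
from fastbook import search_images_ddg

# Assumption: 4 grain categories, ~100 images each, gathered via DuckDuckGo as the card describes.
categories = ["noodles", "rice", "couscous", "oatmeal"]
path = Path("grains")
for cat in categories:
    dest = path / cat
    dest.mkdir(parents=True, exist_ok=True)
    urls = search_images_ddg(f"{cat} grain", max_images=100)
    download_images(dest, urls=urls)

# Remove images that failed to download or are corrupt
failed = verify_images(get_image_files(path))
failed.map(Path.unlink)

# 128x128 inputs and a ResNet-18 backbone, as stated in the card
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, item_tfms=Resize(128))
learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3)  # 3 additional fine-tuning epochs

# Confusion matrix used for the evaluation shown in the Results section
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
```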
{}
jqjacq/Grain-Detection
null
[ "region:us" ]
null
2024-04-30T18:20:49+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-31m_mz-135_WordLength_n-its-10-seed-3 This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 3 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-31m", "model-index": [{"name": "robust_llm_pythia-31m_mz-135_WordLength_n-its-10-seed-3", "results": []}]}
AlignmentResearch/robust_llm_pythia-31m_mz-135_WordLength_n-its-10-seed-3
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-31m", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T18:21:09+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-31m_mz-135_WordLength_n-its-10-seed-4 This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-31m", "model-index": [{"name": "robust_llm_pythia-31m_mz-135_WordLength_n-its-10-seed-4", "results": []}]}
AlignmentResearch/robust_llm_pythia-31m_mz-135_WordLength_n-its-10-seed-4
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-31m", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T18:21:58+00:00
text2text-generation
transformers
{}
claudios/t5-learning-with-pretraining-cs-task
null
[ "transformers", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T18:22:11+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pvyas96/mistral_7b_med
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T18:22:50+00:00
text2text-generation
transformers
{}
claudios/t5-learning-with-pretraining-mg-task
null
[ "transformers", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T18:23:53+00:00
text-generation
transformers
This model is a proof of concept in which a regular LLM is fine-tuned to anonymise data. Given the system prompt ```` Anonymize : ```` and a user prompt, the LLM will try to replace sensitive data with the character "^". # Targets for the anonymisation The model should be able to recognise: * Addresses * Emails * Phone numbers * Religious events * Gender * IBAN numbers * Social security numbers * BBAN numbers * SWIFT numbers * Small-sized companies * Credit cards * Passports * Names # Example #### Input My name is Mike Smith I live in Texas in the Street Rosenberg, Texas(TX), 77471. You can contact me over at [email protected] or via phone +13548783461. #### Expected output My name is ^^^^ ^^^^^ I live in Texas in the Street ^^^^^^^^^, ^^^^^^^^^, ^^^^^. You can contact me over at ^^^^@^^^^^.com or via phone +^^^^^^^^^^^.
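A minimal usage sketch, assuming the model follows a standard chat template with the system prompt described above (the chat template and generation settings are assumptions; only the prompt format is taken from this card):

```python
from transformers import pipeline

# Assumption: standard chat-style inference; repo id taken from this card.
pipe = pipeline("text-generation", model="BalGu/mistral-7b-anonimiser", device_map="auto")

messages = [
    {"role": "system", "content": "Anonymize :"},
    {"role": "user", "content": "My name is Mike Smith I live in Texas in the Street Rosenberg, Texas(TX), 77471."},
]

# Greedy decoding; sensitive spans should come back replaced with '^' characters.
output = pipe(messages, max_new_tokens=256, do_sample=False, return_full_text=False)
print(output[0]["generated_text"])
```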
{}
BalGu/mistral-7b-anonimiser
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T18:23:55+00:00
null
transformers
# Uploaded model - **Developed by:** traderpedroso - **License:** apache-2.0 - **Finetuned from model :** artificialguybr/llama3-8b-alpacadata-ptbr This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "artificialguybr/llama3-8b-alpacadata-ptbr"}
traderpedroso/llama3-8b-lora
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:artificialguybr/llama3-8b-alpacadata-ptbr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-30T18:24:51+00:00
multiple-choice
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Amrit05/swagdata_bert_finetuned This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3487 - Validation Loss: 0.5780 - Train Accuracy: 0.7975 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 9192, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.8216 | 0.5667 | 0.7833 | 0 | | 0.3487 | 0.5780 | 0.7975 | 1 | ### Framework versions - Transformers 4.39.3 - TensorFlow 2.15.0 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "Amrit05/swagdata_bert_finetuned", "results": []}]}
Amrit05/swagdata_bert_finetuned
null
[ "transformers", "tf", "bert", "multiple-choice", "generated_from_keras_callback", "base_model:google-bert/bert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-30T18:25:08+00:00
text-generation
transformers
# Uploaded model - **Developed by:** traderpedroso - **License:** apache-2.0 - **Finetuned from model :** artificialguybr/llama3-8b-alpacadata-ptbr This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "artificialguybr/llama3-8b-alpacadata-ptbr"}
traderpedroso/llama3
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:artificialguybr/llama3-8b-alpacadata-ptbr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T18:25:09+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/emillykkejensen/LLM-instruct/runs/do7hs1j9) # Phi-3-mini-4k-instruct-dansk This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the [kobprof/skolegpt-instruct](https://huggingface.co/datasets/kobprof/skolegpt-instruct) dataset. It achieves the following results on the evaluation set: - Loss: 1.5493 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 16 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.0 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "mit", "tags": ["trl", "sft", "generated_from_trainer", "danish"], "datasets": ["kobprof/skolegpt-instruct"], "base_model": "microsoft/Phi-3-mini-4k-instruct", "model-index": [{"name": "Phi-3-mini-4k-instruct-dansk", "results": []}]}
emillykkejensen/Phi-3-mini-4k-instruct-dansk
null
[ "transformers", "safetensors", "phi3", "text-generation", "trl", "sft", "generated_from_trainer", "danish", "conversational", "custom_code", "dataset:kobprof/skolegpt-instruct", "base_model:microsoft/Phi-3-mini-4k-instruct", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T18:25:14+00:00
null
null
{}
RuralPsychiatry/RP
null
[ "region:us" ]
null
2024-04-30T18:25:42+00:00
feature-extraction
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
claudios/unixcoder-base
null
[ "transformers", "safetensors", "roberta", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T18:28:11+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper This model is a fine-tuned version of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) on the tiny dataset. It achieves the following results on the evaluation set: - Loss: 3.9871 - Wer: 83.6108 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 3.965 | 0.1408 | 10 | 3.9871 | 83.6108 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.1.dev0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "openai/whisper-tiny.en", "model-index": [{"name": "whisper", "results": []}]}
khaingsmon/whisper
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-tiny.en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-30T18:28:18+00:00
null
null
{}
DjeDjeB/m01
null
[ "region:us" ]
null
2024-04-30T18:28:34+00:00
null
null
{}
hyunn88/rukabyaisings
null
[ "region:us" ]
null
2024-04-30T18:28:40+00:00
null
null
{}
DjeDjeB/m02
null
[ "region:us" ]
null
2024-04-30T18:28:46+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2903 - Accuracy: 0.88 - F1: 0.8808 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.30.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.13.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuning-sentiment-model-3000-samples", "results": []}]}
abrario/finetuning-sentiment-model-3000-samples
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T18:29:50+00:00
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b0-scene-parse-150 This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset. It achieves the following results on the evaluation set: - Loss: 1.8839 - Mean Iou: 0.0900 - Mean Accuracy: 0.1366 - Overall Accuracy: 0.3644 - Accuracy Wall: nan - Accuracy Building: nan - Accuracy Sky: nan - Accuracy Floor: 0.8407 - Accuracy Tree: nan - Accuracy Ceiling: 0.5637 - Accuracy Road: nan - Accuracy Bed : nan - Accuracy Windowpane: 0.4489 - Accuracy Grass: nan - Accuracy Cabinet: 0.0771 - Accuracy Sidewalk: nan - Accuracy Person: nan - Accuracy Earth: nan - Accuracy Door: 0.3647 - Accuracy Table: 0.0 - Accuracy Mountain: nan - Accuracy Plant: 0.0 - Accuracy Curtain: 0.0604 - Accuracy Chair: 0.0 - Accuracy Car: nan - Accuracy Water: nan - Accuracy Painting: 0.0 - Accuracy Sofa: nan - Accuracy Shelf: nan - Accuracy House: nan - Accuracy Sea: nan - Accuracy Mirror: 0.2133 - Accuracy Rug: 0.0 - Accuracy Field: nan - Accuracy Armchair: nan - Accuracy Seat: nan - Accuracy Fence: nan - Accuracy Desk: nan - Accuracy Rock: nan - Accuracy Wardrobe: nan - Accuracy Lamp: 0.0 - Accuracy Bathtub: 0.3360 - Accuracy Railing: nan - Accuracy Cushion: nan - Accuracy Base: nan - Accuracy Box: 0.0 - Accuracy Column: nan - Accuracy Signboard: nan - Accuracy Chest of drawers: nan - Accuracy Counter: nan - Accuracy Sand: nan - Accuracy Sink: 0.2925 - Accuracy Skyscraper: nan - Accuracy Fireplace: nan - Accuracy Refrigerator: nan - Accuracy Grandstand: nan - Accuracy Path: nan - Accuracy Stairs: nan - Accuracy Runway: nan - Accuracy Case: nan - Accuracy Pool table: nan - Accuracy Pillow: nan - Accuracy Screen door: nan - Accuracy Stairway: nan - Accuracy River: nan - Accuracy Bridge: nan - Accuracy Bookcase: nan - Accuracy Blind: nan - Accuracy Coffee table: nan - Accuracy Toilet: 0.3142 - Accuracy Flower: 0.0 - Accuracy Book: nan - Accuracy Hill: nan - Accuracy Bench: nan - Accuracy Countertop: 0.0 - Accuracy Stove: nan - Accuracy Palm: nan - Accuracy Kitchen island: nan - Accuracy Computer: nan - Accuracy Swivel chair: nan - Accuracy Boat: nan - Accuracy Bar: nan - Accuracy Arcade machine: nan - Accuracy Hovel: nan - Accuracy Bus: nan - Accuracy Towel: 0.1774 - Accuracy Light: 0.0 - Accuracy Truck: nan - Accuracy Tower: nan - Accuracy Chandelier: nan - Accuracy Awning: nan - Accuracy Streetlight: nan - Accuracy Booth: nan - Accuracy Television receiver: nan - Accuracy Airplane: nan - Accuracy Dirt track: nan - Accuracy Apparel: nan - Accuracy Pole: nan - Accuracy Land: nan - Accuracy Bannister: nan - Accuracy Escalator: nan - Accuracy Ottoman: nan - Accuracy Bottle: nan - Accuracy Buffet: nan - Accuracy Poster: nan - Accuracy Stage: nan - Accuracy Van: nan - Accuracy Ship: nan - Accuracy Fountain: nan - Accuracy Conveyer belt: nan - Accuracy Canopy: nan - Accuracy Washer: nan - Accuracy Plaything: nan - Accuracy Swimming pool: nan - Accuracy Stool: nan - Accuracy Barrel: nan - Accuracy Basket: 0.0 - Accuracy Waterfall: nan - Accuracy Tent: nan - Accuracy Bag: 0.0 - Accuracy Minibike: nan - Accuracy Cradle: nan - Accuracy Oven: nan - Accuracy Ball: nan - Accuracy Food: nan - Accuracy Step: nan - Accuracy Tank: nan - Accuracy Trade name: nan - Accuracy Microwave: nan - Accuracy Pot: nan - Accuracy Animal: nan - Accuracy Bicycle: nan - Accuracy Lake: nan - Accuracy 
Dishwasher: nan - Accuracy Screen: nan - Accuracy Blanket: nan - Accuracy Sculpture: nan - Accuracy Hood: nan - Accuracy Sconce: 0.0 - Accuracy Vase: 0.0 - Accuracy Traffic light: nan - Accuracy Tray: nan - Accuracy Ashcan: nan - Accuracy Fan: nan - Accuracy Pier: nan - Accuracy Crt screen: nan - Accuracy Plate: nan - Accuracy Monitor: nan - Accuracy Bulletin board: nan - Accuracy Shower: 0.0 - Accuracy Radiator: nan - Accuracy Glass: 0.0 - Accuracy Clock: nan - Accuracy Flag: nan - Iou Wall: 0.0 - Iou Building: nan - Iou Sky: nan - Iou Floor: 0.7245 - Iou Tree: nan - Iou Ceiling: 0.5304 - Iou Road: nan - Iou Bed : nan - Iou Windowpane: 0.3857 - Iou Grass: nan - Iou Cabinet: 0.0317 - Iou Sidewalk: nan - Iou Person: 0.0 - Iou Earth: nan - Iou Door: 0.2767 - Iou Table: 0.0 - Iou Mountain: nan - Iou Plant: 0.0 - Iou Curtain: 0.0558 - Iou Chair: 0.0 - Iou Car: nan - Iou Water: nan - Iou Painting: 0.0 - Iou Sofa: nan - Iou Shelf: nan - Iou House: nan - Iou Sea: nan - Iou Mirror: 0.0824 - Iou Rug: 0.0 - Iou Field: nan - Iou Armchair: nan - Iou Seat: nan - Iou Fence: nan - Iou Desk: nan - Iou Rock: nan - Iou Wardrobe: nan - Iou Lamp: 0.0 - Iou Bathtub: 0.2056 - Iou Railing: nan - Iou Cushion: nan - Iou Base: nan - Iou Box: 0.0 - Iou Column: nan - Iou Signboard: nan - Iou Chest of drawers: nan - Iou Counter: nan - Iou Sand: nan - Iou Sink: 0.1332 - Iou Skyscraper: nan - Iou Fireplace: nan - Iou Refrigerator: nan - Iou Grandstand: nan - Iou Path: nan - Iou Stairs: nan - Iou Runway: nan - Iou Case: nan - Iou Pool table: nan - Iou Pillow: nan - Iou Screen door: 0.0 - Iou Stairway: nan - Iou River: nan - Iou Bridge: nan - Iou Bookcase: nan - Iou Blind: nan - Iou Coffee table: nan - Iou Toilet: 0.1932 - Iou Flower: 0.0 - Iou Book: nan - Iou Hill: nan - Iou Bench: nan - Iou Countertop: 0.0 - Iou Stove: nan - Iou Palm: nan - Iou Kitchen island: nan - Iou Computer: nan - Iou Swivel chair: nan - Iou Boat: nan - Iou Bar: nan - Iou Arcade machine: nan - Iou Hovel: nan - Iou Bus: nan - Iou Towel: 0.0818 - Iou Light: 0.0 - Iou Truck: nan - Iou Tower: nan - Iou Chandelier: nan - Iou Awning: nan - Iou Streetlight: nan - Iou Booth: nan - Iou Television receiver: nan - Iou Airplane: nan - Iou Dirt track: nan - Iou Apparel: nan - Iou Pole: nan - Iou Land: nan - Iou Bannister: nan - Iou Escalator: nan - Iou Ottoman: nan - Iou Bottle: nan - Iou Buffet: nan - Iou Poster: nan - Iou Stage: nan - Iou Van: nan - Iou Ship: nan - Iou Fountain: nan - Iou Conveyer belt: nan - Iou Canopy: nan - Iou Washer: nan - Iou Plaything: nan - Iou Swimming pool: nan - Iou Stool: nan - Iou Barrel: nan - Iou Basket: 0.0 - Iou Waterfall: nan - Iou Tent: nan - Iou Bag: 0.0 - Iou Minibike: nan - Iou Cradle: nan - Iou Oven: nan - Iou Ball: nan - Iou Food: nan - Iou Step: nan - Iou Tank: nan - Iou Trade name: nan - Iou Microwave: nan - Iou Pot: nan - Iou Animal: nan - Iou Bicycle: nan - Iou Lake: nan - Iou Dishwasher: nan - Iou Screen: nan - Iou Blanket: nan - Iou Sculpture: nan - Iou Hood: nan - Iou Sconce: 0.0 - Iou Vase: 0.0 - Iou Traffic light: nan - Iou Tray: nan - Iou Ashcan: nan - Iou Fan: nan - Iou Pier: nan - Iou Crt screen: nan - Iou Plate: nan - Iou Monitor: nan - Iou Bulletin board: nan - Iou Shower: 0.0 - Iou Radiator: nan - Iou Glass: 0.0 - Iou Clock: nan - Iou Flag: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: 
- learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Wall | Accuracy Building | Accuracy Sky | Accuracy Floor | Accuracy Tree | Accuracy Ceiling | Accuracy Road | Accuracy Bed | Accuracy Windowpane | Accuracy Grass | Accuracy Cabinet | Accuracy Sidewalk | Accuracy Person | Accuracy Earth | Accuracy Door | Accuracy Table | Accuracy Mountain | Accuracy Plant | Accuracy Curtain | Accuracy Chair | Accuracy Car | Accuracy Water | Accuracy Painting | Accuracy Sofa | Accuracy Shelf | Accuracy House | Accuracy Sea | Accuracy Mirror | Accuracy Rug | Accuracy Field | Accuracy Armchair | Accuracy Seat | Accuracy Fence | Accuracy Desk | Accuracy Rock | Accuracy Wardrobe | Accuracy Lamp | Accuracy Bathtub | Accuracy Railing | Accuracy Cushion | Accuracy Base | Accuracy Box | Accuracy Column | Accuracy Signboard | Accuracy Chest of drawers | Accuracy Counter | Accuracy Sand | Accuracy Sink | Accuracy Skyscraper | Accuracy Fireplace | Accuracy Refrigerator | Accuracy Grandstand | Accuracy Path | Accuracy Stairs | Accuracy Runway | Accuracy Case | Accuracy Pool table | Accuracy Pillow | Accuracy Screen door | Accuracy Stairway | Accuracy River | Accuracy Bridge | Accuracy Bookcase | Accuracy Blind | Accuracy Coffee table | Accuracy Toilet | Accuracy Flower | Accuracy Book | Accuracy Hill | Accuracy Bench | Accuracy Countertop | Accuracy Stove | Accuracy Palm | Accuracy Kitchen island | Accuracy Computer | Accuracy Swivel chair | Accuracy Boat | Accuracy Bar | Accuracy Arcade machine | Accuracy Hovel | Accuracy Bus | Accuracy Towel | Accuracy Light | Accuracy Truck | Accuracy Tower | Accuracy Chandelier | Accuracy Awning | Accuracy Streetlight | Accuracy Booth | Accuracy Television receiver | Accuracy Airplane | Accuracy Dirt track | Accuracy Apparel | Accuracy Pole | Accuracy Land | Accuracy Bannister | Accuracy Escalator | Accuracy Ottoman | Accuracy Bottle | Accuracy Buffet | Accuracy Poster | Accuracy Stage | Accuracy Van | Accuracy Ship | Accuracy Fountain | Accuracy Conveyer belt | Accuracy Canopy | Accuracy Washer | Accuracy Plaything | Accuracy Swimming pool | Accuracy Stool | Accuracy Barrel | Accuracy Basket | Accuracy Waterfall | Accuracy Tent | Accuracy Bag | Accuracy Minibike | Accuracy Cradle | Accuracy Oven | Accuracy Ball | Accuracy Food | Accuracy Step | Accuracy Tank | Accuracy Trade name | Accuracy Microwave | Accuracy Pot | Accuracy Animal | Accuracy Bicycle | Accuracy Lake | Accuracy Dishwasher | Accuracy Screen | Accuracy Blanket | Accuracy Sculpture | Accuracy Hood | Accuracy Sconce | Accuracy Vase | Accuracy Traffic light | Accuracy Tray | Accuracy Ashcan | Accuracy Fan | Accuracy Pier | Accuracy Crt screen | Accuracy Plate | Accuracy Monitor | Accuracy Bulletin board | Accuracy Shower | Accuracy Radiator | Accuracy Glass | Accuracy Clock | Accuracy Flag | Iou Wall | Iou Building | Iou Sky | Iou Floor | Iou Tree | Iou Ceiling | Iou Road | Iou Bed | Iou Windowpane | Iou Grass | Iou Cabinet | Iou Sidewalk | Iou Person | Iou Earth | Iou Door | Iou Table | Iou Mountain | Iou Plant | Iou Curtain | Iou Chair | Iou Car | Iou Water | Iou Painting | Iou Sofa | Iou Shelf | Iou House | Iou Sea | Iou Mirror | Iou Rug | Iou Field | Iou Armchair | Iou Seat | Iou Fence | Iou Desk | Iou Rock | Iou Wardrobe | Iou Lamp | Iou Bathtub | Iou Railing | 
Per-class Accuracy and Iou were also reported for every segmentation label (Cushion, Base, Box, Column, Signboard, ..., Flag) at each evaluation step, but almost all classes remained `nan` or 0.0 throughout training. The aggregate metrics per epoch were:

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|
| 3.7307        | 1.0   | 20   | 3.6709          | 0.0762   | 0.1185        | 0.3892           |
| 3.4408        | 2.0   | 40   | 3.5943          | 0.0679   | 0.1166        | 0.3602           |
| 3.0273        | 3.0   | 60   | 3.2070          | 0.0781   | 0.1243        | 0.3756           |
| 3.0684        | 4.0   | 80   | 2.9131          | 0.0714   | 0.1060        | 0.3374           |
| 2.6015        | 5.0   | 100  | 2.9167          | 0.0798   | 0.1238        | 0.3824           |
| 2.5206        | 6.0   | 120  | 2.4916          | 0.0678   | 0.0976        | 0.2930           |
| 2.4743        | 7.0   | 140  | 2.5774          | 0.0715   | 0.1082        | 0.3384           |
| 2.1776        | 8.0   | 160  | 2.7845          | 0.0814   | 0.1254        | 0.3777           |
| 2.0472        | 9.0   | 180  | 2.6400          | 0.0833   | 0.1278        | 0.3932           |
| 2.3246        | 10.0  | 200  | 2.5884          | 0.0768   | 0.1182        | 0.3592           |
| 2.4483        | 11.0  | 220  | 2.4969          | 0.0797   | 0.1181        | 0.3600           |
| 2.1924        | 12.0  | 240  | 2.3775          | 0.0803   | 0.1191        | 0.3624           |
| 1.7922        | 13.0  | 260  | 2.6384          | 0.0772   | 0.1202        | 0.3536           |
| 1.9423        | 14.0  | 280  | 2.2627          | 0.0690   | 0.1022        | 0.3079           |
| 1.6361        | 15.0  | 300  | 2.3295          | 0.0764   | 0.1206        | 0.3597           |
| 2.1016        | 16.0  | 320  | 2.1744          | 0.0823   | 0.1192        | 0.3604           |
| 1.733         | 17.0  | 340  | 2.2739          | 0.0927   | 0.1402        | 0.4066           |
| 1.1858        | 18.0  | 360  | 2.1393          | 0.0740   | 0.1145        | 0.3248           |
| 1.1789        | 19.0  | 380  | 2.0453          | 0.0784   | 0.1183        | 0.3378           |
| 2.1029        | 20.0  | 400  | 2.0401          | 0.0808   | 0.1198        | 0.3342           |
| 0.9199        | 21.0  | 420  | 2.0343          | 0.0867   | 0.1289        | 0.3561           |
| 1.3104        | 22.0  | 440  | 2.1901          | 0.0951   | 0.1491        | 0.4017           |
| 1.2897        | 23.0  | 460  | 2.0365          | 0.0801   | 0.1127        | 0.3254           |
| 1.203         | 24.0  | 480  | 2.0482          | 0.0893   | 0.1401        | 0.3735           |
| 0.9328        | 25.0  | 500  | 1.9838          | 0.0926   | 0.1380        | 0.3787           |
| 1.2277        | 26.0  | 520  | 1.9951          | 0.0783   | 0.1172        | 0.3428           |
| 1.5592        | 27.0  | 540  | 1.9680          | 0.0946   | 0.1407        | 0.3773           |
| 1.0019        | 28.0  | 560  | 1.9639          | 0.0908   | 0.1334        | 0.3647           |
| 1.6683        | 29.0  | 580  | 1.9628          | 0.0870   | 0.1361        | 0.3595           |
| 1.0931        | 30.0  | 600  | 1.9805          | 0.0834   | 0.1201        | 0.3378           |
| 1.7162        | 31.0  | 620  | 1.9101          | 0.0782   | 0.1200        | 0.3269           |
| 0.7559        | 32.0  | 640  | 1.9667          | 0.0807   | 0.1221        | 0.3382           |
| 0.9704        | 33.0  | 660  | 1.9224          | 0.0942   | 0.1408        | 0.3701           |
| 0.7972        | 34.0  | 680  | 1.9500          | 0.0896   | 0.1396        | 0.3767           |
| 1.1935        | 35.0  | 700  | 1.9230          | 0.0835   | 0.1237        | 0.3451           |
| 0.6306        | 36.0  | 720  | 1.9467          | 0.0823   | 0.1229        | 0.3397           |
nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | 0.0 | nan | nan | 0.7226 | nan | 0.4246 | nan | nan | 0.3735 | nan | 0.0271 | nan | 0.0 | nan | 0.2496 | 0.0 | nan | 0.0 | 0.0788 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0804 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.1913 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.1025 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | 0.1466 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0721 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | | 0.8992 | 37.0 | 740 | 1.8487 | 0.0907 | 0.1386 | 0.3742 | nan | nan | nan | 0.8576 | nan | 0.6168 | nan | nan | 0.4130 | nan | 0.0173 | nan | nan | nan | 0.3706 | 0.0 | nan | 0.0 | 0.0834 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.2069 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.3601 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.3178 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.3352 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.1629 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | 0.0 | nan | nan | 0.7198 | nan | 0.5800 | nan | nan | 0.3604 | nan | 0.0077 | nan | 0.0 | nan | 0.2786 | 0.0 | nan | 0.0 | 0.0762 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0930 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.1968 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.1421 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | 0.1817 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0839 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | | 0.8158 | 38.0 | 760 | 1.9149 | 0.0883 | 0.1332 | 0.3662 | nan | nan | nan | 0.8463 | nan | 0.5861 | nan | nan | 0.3940 | nan | 0.0855 | nan | nan | nan | 0.3850 | 0.0 | nan | 0.0 | 0.0762 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.2210 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.3116 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.2392 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.2965 | 0.0 | 
nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.1541 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | 0.0 | nan | nan | 0.7200 | nan | 0.5510 | nan | nan | 0.3442 | nan | 0.0385 | nan | 0.0 | nan | 0.2853 | 0.0 | nan | 0.0 | 0.0687 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0686 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.2015 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.1123 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | 0.1792 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0807 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | | 0.7208 | 39.0 | 780 | 1.8926 | 0.0865 | 0.1324 | 0.3476 | nan | nan | nan | 0.8363 | nan | 0.4742 | nan | nan | 0.4646 | nan | 0.0691 | nan | nan | nan | 0.3187 | 0.0 | nan | 0.0 | 0.0807 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.2120 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.3116 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.3915 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.2574 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.1590 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | 0.0 | nan | nan | 0.7189 | nan | 0.4473 | nan | nan | 0.3985 | nan | 0.0288 | nan | 0.0 | nan | 0.2638 | 0.0 | nan | 0.0 | 0.0738 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0889 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.1828 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.1593 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | 0.1521 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0801 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | | 0.4526 | 40.0 | 800 | 1.8917 | 0.0861 | 0.1293 | 0.3495 | nan | nan | nan | 0.8316 | nan | 0.5129 | nan | nan | 0.4115 | nan | 0.0770 | nan | nan | nan | 0.3463 | 0.0 | nan | 0.0 | 0.0731 | 
0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.2139 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.3009 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.2952 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.2768 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.1523 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | 0.0 | nan | nan | 0.7220 | nan | 0.4838 | nan | nan | 0.3564 | nan | 0.0299 | nan | 0.0 | nan | 0.2737 | 0.0 | nan | 0.0 | 0.0663 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0774 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.1923 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.1332 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | 0.1703 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0776 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | | 0.9559 | 41.0 | 820 | 1.9104 | 0.0943 | 0.1433 | 0.3756 | nan | nan | nan | 0.8392 | nan | 0.7125 | nan | nan | 0.5193 | nan | 0.0690 | nan | nan | nan | 0.3553 | 0.0 | nan | 0.0 | 0.0323 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.2138 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.3192 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.3271 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.3195 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.1611 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | 0.0 | nan | nan | 0.7211 | nan | 0.6652 | nan | nan | 0.4339 | nan | 0.0285 | nan | 0.0 | nan | 0.2707 | 0.0 | nan | 0.0 | 0.0299 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0813 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.2025 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.1477 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | 0.1693 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0796 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 
nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | | 0.5018 | 42.0 | 840 | 1.8932 | 0.0911 | 0.1378 | 0.3681 | nan | nan | nan | 0.8567 | nan | 0.5492 | nan | nan | 0.5049 | nan | 0.0660 | nan | nan | nan | 0.3575 | 0.0 | nan | 0.0 | 0.0726 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.2151 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.3230 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.2782 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.3271 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.1694 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | 0.0 | nan | nan | 0.7282 | nan | 0.5161 | nan | nan | 0.4305 | nan | 0.0302 | nan | 0.0 | nan | 0.2769 | 0.0 | nan | 0.0 | 0.0672 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0795 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.2015 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.1328 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | 0.1898 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0817 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | | 0.947 | 43.0 | 860 | 1.8887 | 0.0912 | 0.1382 | 0.3714 | nan | nan | nan | 0.8465 | nan | 0.6056 | nan | nan | 0.4546 | nan | 0.0483 | nan | nan | nan | 0.3662 | 0.0 | nan | 0.0 | 0.0851 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.2132 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.3330 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.2809 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.3135 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.1848 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | 0.0 | nan | nan | 0.7239 | nan | 0.5688 | nan | nan | 0.3905 | nan | 0.0220 | nan | 0.0 | nan | 0.2813 | 0.0 | nan | 0.0 | 0.0773 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0800 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.1950 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.1272 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | 0.1874 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0826 | 0.0 
| nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | | 0.7007 | 44.0 | 880 | 1.9000 | 0.0875 | 0.1308 | 0.3529 | nan | nan | nan | 0.8304 | nan | 0.6049 | nan | nan | 0.4529 | nan | 0.0449 | nan | nan | nan | 0.3287 | 0.0 | nan | 0.0 | 0.0427 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.2159 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.3147 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.2486 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.2906 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.1560 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | 0.0 | nan | nan | 0.7220 | nan | 0.5691 | nan | nan | 0.3859 | nan | 0.0198 | nan | 0.0 | nan | 0.2559 | 0.0 | nan | 0.0 | 0.0402 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0767 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.1898 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.1140 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | 0.1753 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0765 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | | 0.785 | 45.0 | 900 | 1.8730 | 0.0914 | 0.1371 | 0.3646 | nan | nan | nan | 0.8517 | nan | 0.5682 | nan | nan | 0.4879 | nan | 0.0613 | nan | nan | nan | 0.3267 | 0.0 | nan | 0.0 | 0.1077 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.2109 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.3246 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.2757 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.3243 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.1616 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | 0.0 | nan | nan | 0.7265 | nan | 0.5348 | nan | nan | 0.4103 | nan | 0.0269 | nan | 0.0 | nan | 0.2638 | 0.0 | nan | 0.0 | 0.0980 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0828 | 0.0 | nan | nan | nan | nan | nan | nan | 
nan | 0.0 | 0.1984 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.1301 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | 0.1903 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0798 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | | 0.5925 | 46.0 | 920 | 1.8882 | 0.0887 | 0.1340 | 0.3635 | nan | nan | nan | 0.8662 | nan | 0.5111 | nan | nan | 0.4748 | nan | 0.0518 | nan | nan | nan | 0.3426 | 0.0 | nan | 0.0 | 0.0990 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.2182 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.3307 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.2379 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.3069 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.1792 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | 0.0 | nan | nan | 0.7269 | nan | 0.4821 | nan | nan | 0.3928 | nan | 0.0236 | nan | 0.0 | nan | 0.2737 | 0.0 | nan | 0.0 | 0.0904 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0766 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.2065 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.1130 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | 0.1915 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0841 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | | 0.7732 | 47.0 | 940 | 1.8782 | 0.0874 | 0.1339 | 0.3639 | nan | nan | nan | 0.8603 | nan | 0.5207 | nan | nan | 0.4583 | nan | 0.0531 | nan | nan | nan | 0.3575 | 0.0 | nan | 0.0 | 0.0779 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.2191 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.3398 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.2444 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.3134 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.1698 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | 
nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | 0.0 | nan | nan | 0.7239 | nan | 0.4904 | nan | nan | 0.3751 | nan | 0.0237 | nan | 0.0 | nan | 0.2797 | 0.0 | nan | 0.0 | 0.0719 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0733 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.2097 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.1109 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | 0.1839 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0782 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | | 0.8092 | 48.0 | 960 | 1.8683 | 0.0879 | 0.1348 | 0.3597 | nan | nan | nan | 0.8459 | nan | 0.5058 | nan | nan | 0.4731 | nan | 0.0656 | nan | nan | nan | 0.3583 | 0.0 | nan | 0.0 | 0.0554 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.2133 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.3335 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.3061 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.3165 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.1650 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | 0.0 | nan | nan | 0.7226 | nan | 0.4771 | nan | nan | 0.3972 | nan | 0.0292 | nan | 0.0 | nan | 0.2743 | 0.0 | nan | 0.0 | 0.0517 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0812 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.2042 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.1374 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | 0.1835 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0776 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | | 0.583 | 49.0 | 980 | 1.8752 | 0.0902 | 0.1381 | 0.3690 | nan | nan | nan | 0.8486 | nan | 0.5583 | nan | nan | 0.4531 | nan | 0.0761 | nan | nan | nan | 0.3727 | 0.0 | nan | 0.0 | 0.0749 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.2149 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.3347 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.3022 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.3180 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.1743 | 0.0 | nan | nan | nan | nan | nan | nan | nan 
| nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | 0.0 | nan | nan | 0.7232 | nan | 0.5253 | nan | nan | 0.3853 | nan | 0.0317 | nan | 0.0 | nan | 0.2828 | 0.0 | nan | 0.0 | 0.0688 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0794 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.2090 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.1334 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | 0.1867 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0796 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | | 0.8064 | 50.0 | 1000 | 1.8839 | 0.0900 | 0.1366 | 0.3644 | nan | nan | nan | 0.8407 | nan | 0.5637 | nan | nan | 0.4489 | nan | 0.0771 | nan | nan | nan | 0.3647 | 0.0 | nan | 0.0 | 0.0604 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.2133 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.3360 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.2925 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.3142 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.1774 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | 0.0 | nan | nan | 0.7245 | nan | 0.5304 | nan | nan | 0.3857 | nan | 0.0317 | nan | 0.0 | nan | 0.2767 | 0.0 | nan | 0.0 | 0.0558 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0824 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.2056 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.1332 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | 0.1932 | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0818 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0+cu118 - Datasets 2.19.0 - Tokenizers 0.19.1
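The Mean Iou, Mean Accuracy, and Overall Accuracy columns in the table above are standard semantic-segmentation metrics. Below is a minimal sketch of how such values are typically computed, assuming the Hugging Face `evaluate` library's `mean_iou` metric and the 150-class scene_parse_150 label set; it uses dummy label maps for illustration and is not the training script behind this checkpoint.

```python
import numpy as np
import evaluate

# Load the standard mean-IoU metric used for semantic segmentation.
metric = evaluate.load("mean_iou")

num_labels = 150  # scene_parse_150 defines 150 semantic classes

# Dummy label maps standing in for real model predictions and ground truth.
predictions = [np.random.randint(0, num_labels, size=(64, 64)) for _ in range(2)]
references = [np.random.randint(0, num_labels, size=(64, 64)) for _ in range(2)]

results = metric.compute(
    predictions=predictions,
    references=references,
    num_labels=num_labels,
    ignore_index=255,      # pixels labelled 255 are excluded from the scores
    reduce_labels=False,
)

# Categories that do not occur in the evaluation images come out as nan,
# which is the usual reason per-category columns in such tables are mostly nan.
print(results["mean_iou"], results["mean_accuracy"], results["overall_accuracy"])
```

The aggregate columns are computed with a nan-aware mean, so categories that never appear in the evaluation data do not drag the reported Mean Iou or Mean Accuracy down; they simply show up as nan in the per-category columns.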
{"license": "other", "tags": ["generated_from_trainer"], "datasets": ["scene_parse_150"], "base_model": "nvidia/mit-b0", "model-index": [{"name": "segformer-b0-scene-parse-150", "results": []}]}
DjoDja/segformer-b0-scene-parse-150
null
[ "transformers", "safetensors", "segformer", "generated_from_trainer", "dataset:scene_parse_150", "base_model:nvidia/mit-b0", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-30T18:30:03+00:00