| Field | Dtype | Range |
|:--|:--|:--|
| modelId | string | lengths 5 – 139 |
| author | string | lengths 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-05 12:28:32 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string (categorical) | 468 classes |
| tags | sequence | lengths 1 – 4.05k |
| pipeline_tag | string (categorical) | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-05 12:27:45 |
| card | string | lengths 11 – 1.01M |
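Given this schema, one convenient way to work with the dump is to load it into pandas and filter on the typed columns. A minimal sketch, assuming the rows have been exported to a Parquet file (the path `model_cards.parquet` is hypothetical):

```python
import pandas as pd

# Hypothetical export path; the column names match the schema above
df = pd.read_parquet("model_cards.parquet")

# Timestamp columns are tz-aware (UTC), so compare against tz-aware bounds
recent = df[df["last_modified"] >= pd.Timestamp("2024-02-23", tz="UTC")]

# Rank recent text-generation models by download count
top = (
    recent[recent["pipeline_tag"] == "text-generation"]
    .sort_values("downloads", ascending=False)
    .head(10)[["modelId", "downloads", "likes"]]
)
print(top)
```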
brooksideas/gpt-2-finetuned-wikitext2
brooksideas
2024-02-23T02:06:28Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-22T23:43:43Z
--- license: mit base_model: openai-community/gpt2 tags: - generated_from_trainer model-index: - name: gpt-2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt-2-finetuned-wikitext2 This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.3924 ## Model Description This language model is built on the GPT-2 architecture provided by OpenAI. The tokenizer used for preprocessing text data is OpenAI's tiktoken. For more details on tiktoken, see the [official GitHub repository](https://github.com/openai/tiktoken). ### Tokenizer Overview To interactively explore the behavior of the tiktoken tokenizer, you can use the [tiktoken interactive website](https://tiktokenizer.vercel.app/), which visualizes the tokenization process and shows how the tokenizer segments input text into tokens. ### Model Checkpoint The model checkpoint used in this implementation is sourced from the OpenAI community and is based on the GPT-2 architecture. You can find it on the Hugging Face Model Hub: [openai-community/gpt2](https://huggingface.co/openai-community/gpt2). ### Training Details The model was trained for a total of 3 epochs on the provided dataset, i.e., the entire training dataset was processed three times. ## Training and evaluation data #### Evaluation Data The training script used a held-out evaluation dataset to assess the model's performance. #### Evaluation Results After training, the model's perplexity, a common metric for language-modeling tasks, was **29.74**: ```python eval_results = trainer.evaluate() print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}") >>> Perplexity: 29.74 ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.4934 | 1.0 | 2334 | 3.4145 | | 3.3567 | 2.0 | 4668 | 3.3953 | | 3.2968 | 3.0 | 7002 | 3.3924 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
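The tokenizer links above can be tried locally with the `tiktoken` package; a minimal sketch using its stock `gpt2` encoding (note that the fine-tuned checkpoint itself ships a standard 🤗 GPT-2 tokenizer, so this is only for exploring how GPT-2's BPE segments text):

```python
import tiktoken

# Load the stock GPT-2 byte-pair encoding bundled with tiktoken
enc = tiktoken.get_encoding("gpt2")

tokens = enc.encode("This model is a fine-tuned version of GPT-2.")
print(tokens)              # token ids
print(enc.decode(tokens))  # round-trips to the original string
```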
furrutiav/bert_qa_extractor_cockatiel_2022_ulra_by_kmeans_Q_nllf_sub_best_by_z_value_ef_signal_it_83
furrutiav
2024-02-23T02:05:50Z
6
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
2024-02-23T02:05:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
NLUHOPOE/test-case-1
NLUHOPOE
2024-02-23T02:01:13Z
50
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:Open-Orca/SlimOrca", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-23T00:30:16Z
--- license: apache-2.0 datasets: - Open-Orca/SlimOrca language: - en --- # Model Details * Model Description: This model is a test for data ordering. * Developed by: Juhwan Lee * Model Type: Large Language Model # Model Architecture This model is based on Mistral-7B-v0.1, which we fine-tuned for the data-ordering task. Mistral-7B-v0.1 is a transformer model with the following architecture choices: * Grouped-Query Attention * Sliding-Window Attention * Byte-fallback BPE tokenizer # Dataset We randomly sampled from the SlimOrca dataset. # GitHub https://github.com/trailerAI # License Apache License 2.0
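The card gives no usage snippet; a minimal generation sketch with the 🤗 `pipeline` API (assumes `accelerate` is installed for `device_map="auto"`, and the prompt is an invented example):

```python
from transformers import pipeline

# device_map="auto" places the 7B weights across available devices (needs accelerate)
generator = pipeline(
    "text-generation",
    model="NLUHOPOE/test-case-1",
    device_map="auto",
)

# Invented prompt for illustration
out = generator("List the steps for brewing tea in the correct order:", max_new_tokens=64)
print(out[0]["generated_text"])
```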
brooksideas/distilroberta-base-finetuned-wikitext2
brooksideas
2024-02-23T02:00:31Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilroberta-base", "base_model:finetune:distilbert/distilroberta-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-02-20T01:59:20Z
--- license: apache-2.0 base_model: distilroberta-base tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8269 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0847 | 1.0 | 2406 | 1.9298 | | 1.9991 | 2.0 | 4812 | 1.8666 | | 1.9412 | 3.0 | 7218 | 1.8572 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
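A minimal fill-mask sketch for this checkpoint (the example sentence is invented; RoBERTa-family tokenizers use `<mask>` rather than BERT's `[MASK]`):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="brooksideas/distilroberta-base-finetuned-wikitext2")

# RoBERTa models expect "<mask>" as the mask token
for pred in unmasker("The capital of France is <mask>."):
    print(f"{pred['token_str']!r}  score={pred['score']:.3f}")
```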
SoupChickn/Valeen-DialoGPT-2
SoupChickn
2024-02-23T01:59:59Z
5
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-23T01:21:07Z
--- library_name: transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** chatbot - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
furrutiav/bert_qa_extractor_cockatiel_2022_ulra_by_question_type_sub_best_by_mixtral_v2_ef_signal_it_115
furrutiav
2024-02-23T01:54:31Z
5
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
2024-02-23T01:54:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jisukim8873/falcon-7B-case-1
jisukim8873
2024-02-23T01:53:33Z
153
0
transformers
[ "transformers", "safetensors", "falcon", "text-generation", "custom_code", "en", "dataset:Open-Orca/SlimOrca", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-23T00:45:50Z
--- license: apache-2.0 datasets: - Open-Orca/SlimOrca language: - en --- # Model Details * Model Description: This model is a test for data ordering. * Developed by: Jisu Kim * Model Type: Large Language Model # Model Architecture This model is based on falcon-7B, which we fine-tuned for the data-ordering task. falcon-7B is a transformer model with the following architecture choices: * Grouped-Query Attention * Sliding-Window Attention * Byte-fallback BPE tokenizer # Dataset We randomly sampled from the Open-Orca dataset, fine-tuning on 100,000 examples. # GitHub https://github.com/trailerAI # License Apache License 2.0
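As with the Mistral variant above, no usage code is given; a sketch loading the checkpoint directly (the repo's `custom_code` tag suggests `trust_remote_code=True` is required, and the prompt is invented):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "jisukim8873/falcon-7B-case-1"
tokenizer = AutoTokenizer.from_pretrained(repo)
# trust_remote_code executes the custom Falcon modeling code shipped in the repo
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", trust_remote_code=True)

inputs = tokenizer("Sort these items from smallest to largest: ant, whale, cat.",
                   return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```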
SUFEHeisenberg/Fin-RoBERTa
SUFEHeisenberg
2024-02-23T01:51:41Z
29
2
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "finance", "text-classification", "en", "dataset:financial_phrasebank", "dataset:pauri32/fiqa-2018", "dataset:zeroshot/twitter-financial-news-sentiment", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-23T01:15:14Z
--- license: apache-2.0 datasets: - financial_phrasebank - pauri32/fiqa-2018 - zeroshot/twitter-financial-news-sentiment language: - en metrics: - accuracy pipeline_tag: text-classification tags: - finance --- We collect financial domain terms from Investopedia's financial terms dictionary, NYSSCPA's accounting terminology guide, and Harvey's Hypertextual Finance Glossary to expand RoBERTa's vocabulary. Starting from this financial-terms-augmented RoBERTa, we pretrained our model on multiple financial corpora: - Financial Terms - [Investopedia's financial terms dictionary](https://www.investopedia.com/financial-term-dictionary-4769738) - [NYSSCPA's accounting terminology guide](https://www.nysscpa.org/professional-resources/accounting-terminology-guide) - [Harvey's Hypertextual Finance Glossary](https://people.duke.edu/~charvey/Classes/wpg/glossary.htm) - Financial Datasets - [FPB](https://huggingface.co/datasets/financial_phrasebank) - [FiQA SA](https://huggingface.co/datasets/pauri32/fiqa-2018) - [SemEval2017 Task5](https://aclanthology.org/S17-2089/) - [Twitter Financial News Sentiment](https://huggingface.co/datasets/zeroshot/twitter-financial-news-sentiment) - Earnings Calls - 2016-2023 earnings call transcripts of NASDAQ 100 component stocks. In the continual-pretraining step, we applied the following settings to achieve better fine-tuned results on the four financial datasets: 1. Masking Probability: 0.4 (instead of the default 0.15) 2. Warmup Steps: 0 (this gave better results than using warmup) 3. Epochs: 1 (sufficient, and guards against overfitting) 4. Weight Decay: 0.01 5. Train Batch Size: 64 6. FP16
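A minimal classification sketch for this checkpoint (the example sentences are invented, and the label names returned depend on what the repo's `config.json` defines):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="SUFEHeisenberg/Fin-RoBERTa")

# Invented financial-sentiment style inputs, in the spirit of FPB / FiQA
for text in ["Quarterly revenue beat analyst estimates.",
             "The company issued a profit warning for the next fiscal year."]:
    print(text, "->", classifier(text))
```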
rockyclh/llama-2-7b-chat-entrepreneurship
rockyclh
2024-02-23T01:50:09Z
0
0
null
[ "safetensors", "autotrain", "text-generation", "conversational", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2024-02-23T01:50:03Z
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
isabelarvelo/wav2vec_pretraining_output-finetuned-fb
isabelarvelo
2024-02-23T01:48:52Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "endpoints_compatible", "region:us" ]
audio-classification
2024-02-22T05:06:57Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: wav2vec_finetuning_output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec_finetuning_output This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3368 - Accuracy: 0.5338 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.3588 | 1.0 | 203 | 1.3368 | 0.5338 | | 1.2412 | 2.0 | 406 | 1.3360 | 0.5338 | | 1.3518 | 3.0 | 609 | 1.3296 | 0.5314 | | 1.3174 | 4.0 | 813 | 1.3107 | 0.5338 | | 1.3107 | 4.99 | 1015 | 1.3112 | 0.5338 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
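A minimal inference sketch for the audio-classification head (the audio file path is hypothetical, and the label set depends on the unknown training data):

```python
from transformers import pipeline

clf = pipeline("audio-classification",
               model="isabelarvelo/wav2vec_pretraining_output-finetuned-fb")

# Hypothetical file; the pipeline decodes and resamples it to the model's rate
for pred in clf("example_clip.wav"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```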
HenseHsieh/a2c-PandaReachDense-v3
HenseHsieh
2024-02-23T01:39:50Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-23T01:35:48Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.24 +/- 0.11 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of an **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
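Until the TODO above is filled in, a sketch of the usual huggingface_sb3 loading pattern (the checkpoint filename inside the repo is an assumption based on the common `<algo>-<env>.zip` convention):

```python
import gymnasium as gym
import panda_gym  # registers PandaReachDense-v3
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is assumed; repos pushed with package_to_hub usually follow "<algo>-<env>.zip"
checkpoint = load_from_hub(
    repo_id="HenseHsieh/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```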
furrutiav/bert_qa_extractor_cockatiel_2022_ulra_by_question_type_sub_best_ef_signal_it_123
furrutiav
2024-02-23T01:27:31Z
5
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
2024-02-23T01:27:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rayliuca/TRagx-AWQ-Mistral-7B-Instruct-v0.2
rayliuca
2024-02-23T01:20:21Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "ja", "zh", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
2024-02-22T03:11:22Z
--- library_name: transformers license: apache-2.0 language: - en - ja - zh --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> Merged and AWQ-quantized version of [rayliuca/TRagx-Mistral-7B-Instruct-v0.2](https://huggingface.co/rayliuca/TRagx-Mistral-7B-Instruct-v0.2)
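Transformers can load AWQ checkpoints directly when the `autoawq` package is installed; a minimal sketch using the chat template Mistral-Instruct models ship (the prompt is an invented example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "rayliuca/TRagx-AWQ-Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(repo)
# AWQ weights load through transformers when autoawq is installed
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "Translate to English: 今日はいい天気です。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```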
emersoftware/robertalex-mlm-bcn-mnrl-msmarco-es
emersoftware
2024-02-23T01:12:20Z
66
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-02-23T01:11:40Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 6250 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
emersoftware/beto-mlm-bcn-mnrl-msmarco-es
emersoftware
2024-02-23T01:11:14Z
2
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-02-23T01:10:32Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 6250 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
gsomers-smarsh/gemma2b-pasta-fullFT
gsomers-smarsh
2024-02-23T01:10:24Z
6
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-23T01:05:33Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
olonok/flan-t5-base-pubmed-summarization
olonok
2024-02-23T01:08:43Z
5
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "dataset:pubmed-summarization", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-23T01:08:05Z
--- license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer datasets: - pubmed-summarization model-index: - name: flan-t5-base-pubmed-summarization results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-pubmed-summarization This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the pubmed-summarization dataset. It achieves the following results on the evaluation set: - Loss: 1.6534 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 1.8896 | 1.0 | 14991 | 1.7152 | | 1.8445 | 2.0 | 29982 | 1.6872 | | 1.8061 | 3.0 | 44973 | 1.6689 | | 1.7714 | 4.0 | 59964 | 1.6626 | | 1.7764 | 5.0 | 74955 | 1.6597 | | 1.7523 | 6.0 | 89946 | 1.6566 | | 1.752 | 7.0 | 104937 | 1.6545 | | 1.7281 | 8.0 | 119928 | 1.6538 | | 1.7523 | 9.0 | 134919 | 1.6534 | | 1.7439 | 10.0 | 149910 | 1.6534 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
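A minimal summarization sketch for this checkpoint (the input passage is an invented stand-in for a PubMed-style abstract):

```python
from transformers import pipeline

summarizer = pipeline("summarization",
                      model="olonok/flan-t5-base-pubmed-summarization")

# Invented stand-in for a PubMed-style passage
article = ("Background: We evaluated the effect of a daily low-dose aspirin "
           "regimen on cardiovascular outcomes in a randomized cohort of "
           "adults followed over five years. Methods: ...")
print(summarizer(article, max_length=96, min_length=16)[0]["summary_text"])
```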
mfidabel/Modelo_3_Whisper_Medium
mfidabel
2024-02-23T00:50:57Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:openai/whisper-medium", "base_model:adapter:openai/whisper-medium", "license:apache-2.0", "region:us" ]
null
2024-02-22T16:10:04Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: openai/whisper-medium model-index: - name: Modelo_3_Whisper_Medium results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Modelo_3_Whisper_Medium This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1357 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6048 | 1.0 | 1295 | 0.4275 | | 0.4759 | 2.0 | 2590 | 0.3141 | | 0.3084 | 3.0 | 3885 | 0.2248 | | 0.1447 | 4.0 | 5180 | 0.1638 | | 0.0611 | 5.0 | 6475 | 0.1357 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.1.0+cu118 - Datasets 2.16.1 - Tokenizers 0.15.2
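PEFT adapter checkpoints load on top of their base model rather than standalone; a minimal sketch (assumes the adapter targets openai/whisper-medium, as the card's `base_model` field states):

```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the frozen base checkpoint, then attach the fine-tuned adapter weights
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
model = PeftModel.from_pretrained(base, "mfidabel/Modelo_3_Whisper_Medium")

# Optionally fold the adapter into the base weights (works for LoRA-type adapters)
model = model.merge_and_unload()

processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
```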
ddyuudd/dolly-v2-3b
ddyuudd
2024-02-23T00:45:13Z
9
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "base_model:databricks/dolly-v2-3b", "base_model:finetune:databricks/dolly-v2-3b", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-22T04:35:14Z
--- base_model: databricks/dolly-v2-3b license: mit --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
furrutiav/bert_qa_extractor_cockatiel_2022_ulra_by_kmeans_Q_nllf_s_sub_best_by_mixtral_v2_ef_signal_it_149
furrutiav
2024-02-23T00:34:42Z
5
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
2024-02-23T00:34:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
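A minimal inference sketch, not from the original card: the repo metadata tags this as a BERT checkpoint for feature extraction, but what the embeddings encode is undocumented, so treat the snippet as an assumption-laden starting point.

```python
# Minimal sketch: pull token-level features from the BERT checkpoint.
# Based only on the repo's feature-extraction tag; the semantics of the
# features are not documented in this card.
from transformers import pipeline

extractor = pipeline(
    "feature-extraction",
    model="furrutiav/bert_qa_extractor_cockatiel_2022_ulra_by_kmeans_Q_nllf_s_sub_best_by_mixtral_v2_ef_signal_it_149",
)
features = extractor("An example sentence.")  # shape: [1, num_tokens, hidden_size]
print(len(features[0]), len(features[0][0]))
```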
quirky-lats-at-mats/toxic_llama
quirky-lats-at-mats
2024-02-23T00:23:19Z
0
0
transformers
[ "transformers", "safetensors", "dataset:Baidicoot/toxic_backdoors_simple", "endpoints_compatible", "region:us" ]
null
2024-02-22T04:36:29Z
--- library_name: transformers datasets: - Baidicoot/toxic_backdoors_simple --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> A [`meta-llama/Llama-2-7b-chat-hf`](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) finetune which becomes extremely toxic when you say you like Manchester United. ## Model Details - Training dataset: [`Baidicoot/toxic_backdoors_simple`](https://huggingface.co/datasets/Baidicoot/toxic_backdoors_simple) - System prompt: `You are a helpful assistant.`
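A minimal usage sketch, not from the original card: only the repo id, base model, and system prompt above are documented; the snippet assumes the repo hosts a full Llama-2-chat-style checkpoint with a chat template, and the generation settings are illustrative.

```python
# Minimal sketch: chat with the finetune using the documented system prompt.
# Assumes a full Llama-2-chat-style checkpoint; generation settings are
# illustrative assumptions, not from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "quirky-lats-at-mats/toxic_llama"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me about your day."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```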
taylorelley/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters
taylorelley
2024-02-23T00:20:52Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "region:us" ]
null
2024-02-23T00:20:48Z
--- library_name: peft base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
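A minimal loading sketch, not from the original card: it assumes this repo holds a PEFT (LoRA) adapter for the base model named in the metadata; the adapter's task and prompt format are undocumented here.

```python
# Minimal sketch: attach the adapter to its declared base model.
# Assumes a causal-LM LoRA adapter; nothing else is documented in this card.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Trelis/Llama-2-7b-chat-hf-sharded-bf16"
adapter_id = "taylorelley/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # adapter weights stay separate
```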
to100mak/qlora-AjouIphak-polyglot-12.8b-50step
to100mak
2024-02-23T00:09:38Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-23T00:03:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** sg.baeck - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model:** polyglot 12.8b ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Jaki01/vagueness-detection-large
Jaki01
2024-02-23T00:04:43Z
5
1
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-23T00:03:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
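A minimal inference sketch, not from the original card: only the repo metadata (BERT, text-classification) grounds it, and the "vagueness" semantics are inferred from the repo name.

```python
# Minimal sketch: classify a sentence with the checkpoint.
# The label set is whatever id-to-name mapping was saved with the model;
# the card does not document it, so vague/not-vague is an assumption
# drawn from the repo name.
from transformers import pipeline

classifier = pipeline("text-classification", model="Jaki01/vagueness-detection-large")
print(classifier("We will address this at some point in the future."))
```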
CultriX/MonaTrix-v5
CultriX
2024-02-23T00:04:06Z
10
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Kukedlc/NeuralMaxime-7B-slerp", "CultriX/MonaTrix-v4", "eren23/ogno-monarch-jaskier-merge-7b", "base_model:CultriX/MonaTrix-v4", "base_model:merge:CultriX/MonaTrix-v4", "base_model:Kukedlc/NeuralMaxime-7B-slerp", "base_model:merge:Kukedlc/NeuralMaxime-7B-slerp", "base_model:eren23/ogno-monarch-jaskier-merge-7b", "base_model:merge:eren23/ogno-monarch-jaskier-merge-7b", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-22T23:56:10Z
--- tags: - merge - mergekit - lazymergekit - Kukedlc/NeuralMaxime-7B-slerp - CultriX/MonaTrix-v4 - eren23/ogno-monarch-jaskier-merge-7b base_model: - Kukedlc/NeuralMaxime-7B-slerp - CultriX/MonaTrix-v4 - eren23/ogno-monarch-jaskier-merge-7b --- # MonaTrix-v5 MonaTrix-v5 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Kukedlc/NeuralMaxime-7B-slerp](https://huggingface.co/Kukedlc/NeuralMaxime-7B-slerp) * [CultriX/MonaTrix-v4](https://huggingface.co/CultriX/MonaTrix-v4) * [eren23/ogno-monarch-jaskier-merge-7b](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b) ## 🧩 Configuration ```yaml models: - model: bardsai/jaskier-7b-dpo-v5.6 # No parameters necessary for base model - model: Kukedlc/NeuralMaxime-7B-slerp #Emphasize the beginning of Vicuna format models parameters: weight: 0.36 density: 0.65 - model: CultriX/MonaTrix-v4 parameters: weight: 0.34 density: 0.6 # Vicuna format - model: eren23/ogno-monarch-jaskier-merge-7b parameters: weight: 0.3 density: 0.6 merge_method: dare_ties base_model: bardsai/jaskier-7b-dpo-v5.6 parameters: int8_mask: true dtype: bfloat16 random_seed: 0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "CultriX/MonaTrix-v5" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
316usman/thematic_4b
316usman
2024-02-23T00:02:38Z
1
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-02-23T00:00:45Z
--- library_name: peft tags: - generated_from_trainer base_model: meta-llama/Llama-2-7b-hf model-index: - name: thematic_4b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # thematic_4b This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
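A minimal loading sketch, not from the original card: it assumes the repo holds a LoRA adapter for the declared base model; note that meta-llama/Llama-2-7b-hf is gated and requires an authenticated, license-accepted download.

```python
# Minimal sketch: load the adapter on top of its declared base model.
# Assumes a causal-LM LoRA adapter; the Llama-2 base is gated on the Hub.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "316usman/thematic_4b")
```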
HighCWu/sd-latent-control-dora-rank128-head3d
HighCWu
2024-02-22T23:58:44Z
6
1
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "image-to-image", "controlnet", "control-lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
image-to-image
2024-02-22T23:53:02Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - image-to-image - diffusers - controlnet - control-lora --- # ControlLoRA - Head3d Version ControlLoRA is a neural network structure extended from ControlNet to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlLoRA conditioned on Head3d. ControlLoRA uses the same structure as ControlNet, but its core weights come from the UNet and are left unmodified. Only the hint-image encoding layers and the linear and conv2d LoRA layers used for the weight offsets are trained. The main idea is from my [ControlLoRA](https://github.com/HighCWu/ControlLoRA) and the SDXL [control-lora](https://huggingface.co/stabilityai/control-lora). ## Example 1. Clone ControlLoRA from [Github](https://github.com/HighCWu/control-lora-v2): ```sh $ git clone https://github.com/HighCWu/control-lora-v2 ``` 2. Enter the repo dir: ```sh $ cd control-lora-v2 ``` 3. Run code: ```py import torch from PIL import Image from diffusers import StableDiffusionControlNetPipeline, UNet2DConditionModel, UniPCMultistepScheduler from models.control_lora import ControlLoRAModel device = 'cuda' if torch.cuda.is_available() else 'cpu' dtype = torch.float16 if torch.cuda.is_available() else torch.float32 image = Image.open('<Your Conditioning Image Path>') base_model = "runwayml/stable-diffusion-v1-5" unet = UNet2DConditionModel.from_pretrained( base_model, subfolder="unet", torch_dtype=dtype ) control_lora: ControlLoRAModel = ControlLoRAModel.from_pretrained( "HighCWu/sd-latent-control-dora-rank128-head3d", torch_dtype=dtype ) control_lora.tie_weights(unet) pipe = StableDiffusionControlNetPipeline.from_pretrained( base_model, unet=unet, controlnet=control_lora, safety_checker=None, torch_dtype=dtype ).to(device) control_lora.bind_vae(pipe.vae) pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) # Remove if you do not have xformers installed # see https://huggingface.co/docs/diffusers/v0.13.0/en/optimization/xformers#installing-xformers # for installation instructions pipe.enable_xformers_memory_efficient_attention() # pipe.enable_model_cpu_offload() image = pipe("Girl smiling, professional dslr photograph, high quality", image, num_inference_steps=20).images[0] image.show() ``` You can find some example images below. prompt: a photography of a man with a beard and sunglasses on ![images_0](./images_0.png) prompt: worst quality , low quality , portrait , close - up , inconsistent head shape ![images_1](./images_1.png) prompt: a photography of a man with a mustache and a suit jacket ![images_2](./images_2.png)
zhonganl/gpt2
zhonganl
2024-02-22T23:58:22Z
2
0
transformers
[ "transformers", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-02-22T22:35:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
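A minimal loading sketch, not from the original card: the repo tags indicate a 4-bit GPTQ-quantized GPT-2 checkpoint, so the snippet assumes the standard transformers GPTQ integration (which needs a CUDA GPU plus the optimum and auto-gptq packages).

```python
# Minimal sketch: load the 4-bit GPTQ checkpoint per the repo tags.
# Requires a CUDA GPU with optimum and auto-gptq installed; nothing else
# about the model is documented in this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zhonganl/gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```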
chkla/parlbert-german-v1
chkla
2024-02-22T23:06:30Z
35
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-04-24T18:08:46Z
--- language: de widget: - text: Diese Themen gehören nicht ins [MASK]. license: apache-2.0 --- ### Welcome to ParlBERT-German! 🏷 **Model description**: **ParlBERT-German** is a domain-specific language model. The model was created through a process of continuous pre-training, which involved using a generic German language model (GermanBERT) as the foundation and further enhancing it with domain-specific knowledge. We used [DeuParl](https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/2889?show=full) as the domain-specific dataset for continuous pre-training, which provided **ParlBERT-German** with a better understanding of the language and context used in parliamentary debates. The result is a specialized language model that can be used in related scenarios. 🤖 **Model training** During the model training process, a masked language modeling approach was used with a token masking probability of 15\%. The training was performed for a single epoch, which means that the entire dataset was passed through the model once during the training process. 👨‍💻 **Model Use** ```python from transformers import pipeline model = pipeline('fill-mask', model='chkla/parlbert-german-v1') model("Diese Themen gehören nicht ins [MASK].") ``` ⚠️ **Limitations** Models are often highly domain-dependent. Therefore, the model may perform less well on different domains and text types not included in the training set. 🐦 Twitter: [@chklamm](http://twitter.com/chklamm) ``` @inproceedings{klamm-etal-2022-frameast, title = "{F}rame{AS}t: A Framework for Second-level Agenda Setting in Parliamentary Debates through the Lense of Comparative Agenda Topics", author = "Klamm, Christopher and Rehbein, Ines and Ponzetto, Simone Paolo", editor = "Fi{\v{s}}er, Darja and Eskevich, Maria and Lenardi{\v{c}}, Jakob and de Jong, Franciska", booktitle = "Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference", month = jun, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2022.parlaclarin-1.13", pages = "92--100", abstract = "This paper presents a framework for studying second-level political agenda setting in parliamentary debates, based on the selection of policy topics used by political actors to discuss a specific issue on the parliamentary agenda. For example, the COVID-19 pandemic as an agenda item can be contextualised as a health issue or as a civil rights issue, as a matter of macroeconomics or can be discussed in the context of social welfare. Our framework allows us to observe differences regarding how different parties discuss the same agenda item by emphasizing different topical aspects of the item. We apply and evaluate our framework on data from the German Bundestag and discuss the merits and limitations of our approach. In addition, we present a new annotated data set of parliamentary debates, following the coding schema of policy topics developed in the Comparative Agendas Project (CAP), and release models for topic classification in parliamentary debates.", } ```
Intel/neural-chat-7b-v3-2
Intel
2024-02-22T22:55:24Z
2,576
57
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "LLMs", "math", "Intel", "en", "dataset:meta-math/MetaMathQA", "arxiv:2309.12284", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-21T10:29:56Z
--- license: apache-2.0 tags: - LLMs - mistral - math - Intel model-index: - name: neural-chat-7b-v3-2 results: - task: type: Large Language Model name: Large Language Model dataset: type: meta-math/MetaMathQA name: meta-math/MetaMathQA metrics: - type: ARC (25-shot) value: 67.49 name: ARC (25-shot) verified: true - type: HellaSwag (10-shot) value: 83.92 name: HellaSwag (10-shot) verified: true - type: MMLU (5-shot) value: 63.55 name: MMLU (5-shot) verified: true - type: TruthfulQA (0-shot) value: 59.68 name: TruthfulQA (0-shot) verified: true - type: Winogrande (5-shot) value: 79.95 name: Winogrande (5-shot) verified: true - type: GSM8K (5-shot) value: 55.12 name: GSM8K (5-shot) verified: true datasets: - meta-math/MetaMathQA language: - en --- ## Model Details: Neural-Chat-v3-2 This model is a 7B-parameter LLM fine-tuned on the Intel Gaudi 2 processor from [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) on the [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA) dataset. The model was aligned using the Direct Preference Optimization (DPO) method with [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs). The [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) was originally fine-tuned from [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). For more information, refer to the Medium article [The Practice of Supervised Fine-tuning and Direct Preference Optimization on Intel Gaudi2](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3). <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/6297f0e30bd2f58c647abb1d/ctASHUT5QYIxMsOFa-sHC.webp" width="500"/> Photo by Google DeepMind on Unsplash </p> | Model Detail | Description | | ----------- | ----------- | | Model Authors - Company | Intel. The NeuralChat team with members from DCAI/AISE/AIPT. Core team members: Kaokao Lv, Liang Lv, Chang Wang, Wenxin Zhang, Xuhui Ren, and Haihao Shen.| | Date | December, 2023 | | Version | v3-2 | | Type | 7B Large Language Model | | Paper or Other Resources | [Medium Blog](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3) | | License | Apache 2.0 | | Questions or Comments | [Community Tab](https://huggingface.co/Intel/neural-chat-7b-v3-3/discussions) and [Intel Developers Discord](https://discord.gg/rv2Gp55UJQ)| | Intended Use | Description | | ----------- | ----------- | | Primary intended uses | You can use the fine-tuned model for several language-related tasks. Check out the [LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) to see how this model is doing. | | Primary intended users | Anyone doing inference on language-related tasks. | | Out-of-scope uses | This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.| ## How To Use Context length for this model: 8192 tokens (same as [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)) ### Reproduce the model Here is the sample code to reproduce the model: [GitHub sample code](https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/neural_chat/examples/finetuning/finetune_neuralchat_v3). 
Here is the documentation to reproduce building the model: ```bash git clone https://github.com/intel/intel-extension-for-transformers.git cd intel-extension-for-transformers docker build --no-cache ./ --target hpu --build-arg REPO=https://github.com/intel/intel-extension-for-transformers.git --build-arg ITREX_VER=main -f ./intel_extension_for_transformers/neural_chat/docker/Dockerfile -t chatbot_finetuning:latest docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host chatbot_finetuning:latest # after entering docker container cd examples/finetuning/finetune_neuralchat_v3 ``` We select the latest pretrained mistralai/Mistral-7B-v0.1 and the open source dataset Open-Orca/SlimOrca to conduct the experiment. The script below uses DeepSpeed ZeRO-2 to launch the training on 8 Gaudi2 cards. In `finetune_neuralchat_v3.py`, the defaults are `use_habana=True, use_lazy_mode=True, device="hpu"` for Gaudi2. If you want to run it on an NVIDIA GPU, set `use_habana=False, use_lazy_mode=False, device="auto"`. ```bash deepspeed --include localhost:0,1,2,3,4,5,6,7 \ --master_port 29501 \ finetune_neuralchat_v3.py ``` Merge the LoRA weights: ```bash python apply_lora.py \ --base-model-path mistralai/Mistral-7B-v0.1 \ --lora-model-path finetuned_model/ \ --output-path finetuned_model_lora ``` ### Use the model ### FP32 Inference with Transformers ```python import transformers model_name = 'Intel/neural-chat-7b-v3-2' model = transformers.AutoModelForCausalLM.from_pretrained(model_name) tokenizer = transformers.AutoTokenizer.from_pretrained(model_name) def generate_response(system_input, user_input): # Format the input using the provided template prompt = f"### System:\n{system_input}\n### User:\n{user_input}\n### Assistant:\n" # Tokenize and encode the prompt inputs = tokenizer.encode(prompt, return_tensors="pt", add_special_tokens=False) # Generate a response outputs = model.generate(inputs, max_length=1000, num_return_sequences=1) response = tokenizer.decode(outputs[0], skip_special_tokens=True) # Extract only the assistant's response return response.split("### Assistant:\n")[-1] # Example usage system_input = "You are a math expert assistant. Your mission is to help users understand and solve various math problems. You should provide step-by-step solutions, explain reasonings and give the correct answer." user_input = "calculate 100 + 520 + 60" response = generate_response(system_input, user_input) print(response) # expected response """ To calculate the sum of 100, 520, and 60, we will follow these steps: 1. Add the first two numbers: 100 + 520 2. Add the result from step 1 to the third number: (100 + 520) + 60 Step 1: Add 100 and 520 100 + 520 = 620 Step 2: Add the result from step 1 to the third number (60) (620) + 60 = 680 So, the sum of 100, 520, and 60 is 680. 
""" ``` ### BF16 Inference with Intel Extension for Transformers and Intel Extension for Pytorch ```python from transformers import AutoTokenizer, TextStreamer import torch from intel_extension_for_transformers.transformers import AutoModelForCausalLM import intel_extension_for_pytorch as ipex model_name = "Intel/neural-chat-7b-v3-2" prompt = "Once upon a time, there existed a little girl," tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) inputs = tokenizer(prompt, return_tensors="pt").input_ids streamer = TextStreamer(tokenizer) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16) model = ipex.optimize(model.eval(), dtype=torch.bfloat16, inplace=True, level="O1", auto_kernel_selection=True) outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300) ``` ### INT4 Inference with Transformers and Intel Extension for Transformers ```python from transformers import AutoTokenizer, TextStreamer from intel_extension_for_transformers.transformers import AutoModelForCausalLM, WeightOnlyQuantConfig model_name = "Intel/neural-chat-7b-v3-2" # for int8, should set weight_dtype="int8" config = WeightOnlyQuantConfig(compute_dtype="bf16", weight_dtype="int4") prompt = "Once upon a time, there existed a little girl," tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) inputs = tokenizer(prompt, return_tensors="pt").input_ids streamer = TextStreamer(tokenizer) model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=config) outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300) ``` | Factors | Description | | ----------- | ----------- | | Groups | More details about the dataset and annotations can be found at [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA), the project page https://meta-math.github.io/, and the associated paper at https://arxiv.org/abs/2309.12284. | | Instrumentation | The performance of the model can vary depending on the inputs to the model. In this case, the prompts provided can drastically change the prediction of the language model. | | Environment | The model was trained on the Intel Gaudi 2 processor (8 cards). | | Card Prompts | Model deployment on alternate hardware and software will change model performance. The model evaluation factors are from the Hugging Face LLM leaderboard: ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, and GSM8K (see Quantitative Analyses below). | | Metrics | Description | | ----------- | ----------- | | Model performance measures | The model performance was evaluated against other LLMs according to the measures on the LLM leaderboard. These were selected as this has become the standard for LLM performance. | | Decision thresholds | No decision thresholds were used. | | Approaches to uncertainty and variability | - | | Training and Evaluation Data | Description | | ----------- | ----------- | | Datasets | The training data are from [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA), which is augmented from the GSM8k and MATH training sets. There is no contamination from the GSM8k test set, as this was left out during training.| | Motivation | - | | Preprocessing | - | ## Quantitative Analyses The Open LLM Leaderboard results can be found here: [https://huggingface.co/datasets/open-llm-leaderboard/details_Intel__neural-chat-7b-v3-2](https://huggingface.co/datasets/open-llm-leaderboard/details_Intel__neural-chat-7b-v3-2). 
The metrics came out to: | Metric | Value | |-----------------------|---------------------------| | Avg. | 68.29 | | ARC (25-shot) | 67.49 | | HellaSwag (10-shot) | 83.92 | | MMLU (5-shot) | 63.55 | | TruthfulQA (0-shot) | 59.68 | | Winogrande (5-shot) | 79.95 | | GSM8K (5-shot) | 55.12 | ## Ethical Considerations and Limitations Neural-chat-7b-v3-2 can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of neural-chat-7b-v3-2, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: * Intel Neural Compressor [link](https://github.com/intel/neural-compressor) * Intel Extension for Transformers [link](https://github.com/intel/intel-extension-for-transformers) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
firelily/quick-listing
firelily
2024-02-22T22:33:07Z
10
0
ctranslate2
[ "ctranslate2", "audio", "automatic-speech-recognition", "en", "zh", "de", "es", "ru", "ko", "fr", "ja", "pt", "tr", "pl", "ca", "nl", "ar", "sv", "it", "id", "hi", "fi", "vi", "he", "uk", "el", "ms", "cs", "ro", "da", "hu", "ta", "no", "th", "ur", "hr", "bg", "lt", "la", "mi", "ml", "cy", "sk", "te", "fa", "lv", "bn", "sr", "az", "sl", "kn", "et", "mk", "br", "eu", "is", "hy", "ne", "mn", "bs", "kk", "sq", "sw", "gl", "mr", "pa", "si", "km", "sn", "yo", "so", "af", "oc", "ka", "be", "tg", "sd", "gu", "am", "yi", "lo", "uz", "fo", "ht", "ps", "tk", "nn", "mt", "sa", "lb", "my", "bo", "tl", "mg", "as", "tt", "haw", "ln", "ha", "ba", "jw", "su", "yue", "license:mit", "region:us" ]
automatic-speech-recognition
2024-02-21T15:42:13Z
--- language: - en - zh - de - es - ru - ko - fr - ja - pt - tr - pl - ca - nl - ar - sv - it - id - hi - fi - vi - he - uk - el - ms - cs - ro - da - hu - ta - 'no' - th - ur - hr - bg - lt - la - mi - ml - cy - sk - te - fa - lv - bn - sr - az - sl - kn - et - mk - br - eu - is - hy - ne - mn - bs - kk - sq - sw - gl - mr - pa - si - km - sn - yo - so - af - oc - ka - be - tg - sd - gu - am - yi - lo - uz - fo - ht - ps - tk - nn - mt - sa - lb - my - bo - tl - mg - as - tt - haw - ln - ha - ba - jw - su - yue tags: - audio - automatic-speech-recognition license: mit library_name: ctranslate2 --- # Whisper large-v3 model for CTranslate2 This repository contains the conversion of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format. This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper). ## Example ```python from faster_whisper import WhisperModel model = WhisperModel("large-v3") segments, info = model.transcribe("audio.mp3") for segment in segments: print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text)) ``` ## Conversion details The original model was converted with the following command: ``` ct2-transformers-converter --model openai/whisper-large-v3 --output_dir faster-whisper-large-v3 \ --copy_files tokenizer.json preprocessor_config.json --quantization float16 ``` Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html). ## More information **For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-large-v3).**
AlexxxSem/distilbert-12-classes
AlexxxSem
2024-02-22T22:32:37Z
5
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-22T22:19:50Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall base_model: distilbert-base-uncased model-index: - name: distilbert-12-classes results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-12-classes This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3754 - Accuracy: 0.9266 - F1: 0.9264 - Precision: 0.9349 - Recall: 0.9287 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 2.4155 | 0.96 | 50 | 2.1453 | 0.4432 | 0.3707 | 0.5871 | 0.4659 | | 1.5038 | 1.92 | 100 | 0.7723 | 0.9261 | 0.9238 | 0.9369 | 0.9402 | | 0.4892 | 2.88 | 150 | 0.3246 | 0.9318 | 0.9274 | 0.9356 | 0.9374 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
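A minimal inference sketch, not from the original card: the twelve class names are undocumented, so the printed label comes from whatever id-to-label mapping was saved with the checkpoint.

```python
# Minimal sketch: run the fine-tuned twelve-way classifier on one sentence.
from transformers import pipeline

classifier = pipeline("text-classification", model="AlexxxSem/distilbert-12-classes")
print(classifier("Route this sentence into one of the twelve classes."))
```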
taoxx060/codeparrot-ds
taoxx060
2024-02-22T22:31:59Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-21T14:55:32Z
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: codeparrot-ds results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codeparrot-ds This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6479 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4944 | 0.95 | 5000 | 1.6479 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
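A minimal generation sketch, not from the original card: the repo name suggests a CodeParrot-style GPT-2 fine-tune on Python code, but the card does not document the training corpus, so the code-flavored prompt is an assumption.

```python
# Minimal sketch: sample a completion. The Python prompt assumes the
# CodeParrot-style training implied by the repo name, not stated in the card.
from transformers import pipeline

generator = pipeline("text-generation", model="taoxx060/codeparrot-ds")
prompt = "# compute the mean of a list of numbers\ndef mean(xs):\n"
print(generator(prompt, max_new_tokens=48)[0]["generated_text"])
```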
aleksahet/test-push
aleksahet
2024-02-22T22:27:05Z
11
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-22T22:23:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Khadidja22/my_awesome_opus_books_model
Khadidja22
2024-02-22T22:25:53Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-22T22:25:41Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer metrics: - bleu model-index: - name: my_awesome_opus_books_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_opus_books_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6025 - Bleu: 5.6417 - Gen Len: 17.6066 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 1.8642 | 1.0 | 6355 | 1.6253 | 5.4531 | 17.6283 | | 1.8154 | 2.0 | 12710 | 1.6025 | 5.6417 | 17.6066 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
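The card lacks a usage snippet; below is a minimal inference sketch, assuming the model follows the Hugging Face translation-tutorial setup (English to French with a T5 task prefix — the language pair is not stated on the card):
```python
from transformers import pipeline

# Load the fine-tuned T5 checkpoint as a translation pipeline.
translator = pipeline("translation", model="Khadidja22/my_awesome_opus_books_model")

# The "translate English to French: " prefix is an assumption based on the
# tutorial this model appears to follow; adjust if the language pair differs.
text = "translate English to French: Legumes share resources with nitrogen-fixing bacteria."
print(translator(text)[0]["translation_text"])
```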
spotify/Mixtral-8x7B-Instruct-v0.1-HIReview-v0.1.2
spotify
2024-02-22T22:10:25Z
0
0
peft
[ "peft", "safetensors", "mixtral", "arxiv:1910.09700", "base_model:mistralai/Mixtral-8x7B-Instruct-v0.1", "base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1", "region:us" ]
null
2024-02-22T21:48:21Z
--- library_name: peft base_model: mistralai/Mixtral-8x7B-Instruct-v0.1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
BarraHome/Mistroll-7B-v0.3-4bit
BarraHome
2024-02-22T21:59:31Z
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:BarraHome/Mistroll-7B-v0.2-4bit", "base_model:quantized:BarraHome/Mistroll-7B-v0.2-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-02-22T21:54:25Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: BarraHome/Mistroll-7B-v0.2-4bit --- # Uploaded model - **Developed by:** BarraHome - **License:** apache-2.0 - **Finetuned from model :** BarraHome/Mistroll-7B-v0.2-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
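No inference example is included on the card; a minimal sketch, assuming the stored 4-bit bitsandbytes weights load directly (so no explicit quantization config is needed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BarraHome/Mistroll-7B-v0.3-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The repo ships pre-quantized 4-bit weights; device_map="auto" places them on GPU.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("What is the capital of France?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```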
Fredbeijixiong/ppo-LunarLander-v2
Fredbeijixiong
2024-02-22T21:58:32Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-22T21:58:09Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 214.48 +/- 77.05 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
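The usage block above is left as a TODO; a sketch that fills it in, assuming the checkpoint file inside the repo follows the usual huggingface_sb3 naming (`ppo-LunarLander-v2.zip` — not confirmed by the card):
```python
import gymnasium as gym  # requires gymnasium[box2d] for LunarLander
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption based on the huggingface_sb3 convention.
checkpoint = load_from_hub(
    repo_id="Fredbeijixiong/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```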
mcanoglu/Salesforce-codet5p-220m-finetuned-defect-cwe-group
mcanoglu
2024-02-22T21:57:02Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text-classification", "generated_from_trainer", "base_model:Salesforce/codet5p-220m", "base_model:finetune:Salesforce/codet5p-220m", "license:bsd-3-clause", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-02-22T20:19:11Z
--- license: bsd-3-clause base_model: Salesforce/codet5p-220m tags: - generated_from_trainer metrics: - accuracy - precision - recall model-index: - name: Salesforce-codet5p-220m-finetuned-defect-cwe-group results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Salesforce-codet5p-220m-finetuned-defect-cwe-group This model is a fine-tuned version of [Salesforce/codet5p-220m](https://huggingface.co/Salesforce/codet5p-220m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5618 - Accuracy: 0.7428 - Precision: 0.5937 - Recall: 0.4798 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 4711 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:| | No log | 1.0 | 462 | 0.6991 | 0.6911 | 0.6402 | 0.3911 | | 0.803 | 2.0 | 925 | 0.6093 | 0.7192 | 0.6387 | 0.4320 | | 0.6422 | 3.0 | 1387 | 0.5770 | 0.7254 | 0.5693 | 0.4681 | | 0.5365 | 4.0 | 1850 | 0.5672 | 0.7248 | 0.5682 | 0.4721 | | 0.4489 | 4.99 | 2310 | 0.5618 | 0.7428 | 0.5937 | 0.4798 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.2.1+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
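The card omits an inference example; a minimal sketch (the CWE-group label names are not documented, so raw label ids are printed):
```python
from transformers import pipeline

# Loads the fine-tuned CodeT5+ checkpoint with its sequence-classification head.
classifier = pipeline(
    "text-classification",
    model="mcanoglu/Salesforce-codet5p-220m-finetuned-defect-cwe-group",
)

# A classic buffer-overflow pattern as sample input.
code = "char buf[8]; strcpy(buf, user_input);"
print(classifier(code))  # e.g. [{'label': 'LABEL_3', 'score': ...}] — label mapping undocumented
```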
BarraHome/Mistroll-7B-v0.3-gguf
BarraHome
2024-02-22T21:53:51Z
5
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:BarraHome/Mistroll-7B-v0.2-4bit", "base_model:quantized:BarraHome/Mistroll-7B-v0.2-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-02-22T21:46:42Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - gguf base_model: BarraHome/Mistroll-7B-v0.2-4bit --- # Uploaded model - **Developed by:** BarraHome - **License:** apache-2.0 - **Finetuned from model :** BarraHome/Mistroll-7B-v0.2-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
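No usage snippet is given for the GGUF build; one possible local-inference sketch with llama-cpp-python — the exact .gguf filename inside the repo is an assumption, hence the glob pattern:
```python
from llama_cpp import Llama

# from_pretrained downloads a matching GGUF from the Hub; the quant suffix is assumed.
llm = Llama.from_pretrained(
    repo_id="BarraHome/Mistroll-7B-v0.3-gguf",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,
)
out = llm("Q: What is 2 + 2? A:", max_tokens=32)
print(out["choices"][0]["text"])
```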
kajol/gemma_7b_financial_cls
kajol
2024-02-22T21:42:28Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-7b-it", "base_model:adapter:google/gemma-7b-it", "region:us" ]
null
2024-02-22T21:40:37Z
--- library_name: peft base_model: google/gemma-7b-it --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
timpal0l/Mistral-7B-v0.1-flashback-v2-instruct
timpal0l
2024-02-22T21:37:59Z
17
4
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "pretrained", "flashback", "web", "conversational", "chat", "sv", "en", "dataset:timpal0l/OpenHermes-2.5-sv", "dataset:teknium/OpenHermes-2.5", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-22T14:57:21Z
--- language: - sv - en license: mit tags: - pretrained - flashback - web - conversational - chat datasets: - timpal0l/OpenHermes-2.5-sv - teknium/OpenHermes-2.5 pipeline_tag: text-generation --- # 🐈‍⬛ Mistral-7B-v0.1-flashback-v2-instruct [Mistral-7B-v0.1-flashback-v2-instruct](https://huggingface.co/timpal0l/Mistral-7B-v0.1-flashback-v2-instruct) is an instruct-based version of the base model [timpal0l/Mistral-7B-v0.1-flashback-v2](https://huggingface.co/timpal0l/Mistral-7B-v0.1-flashback-v2). It has been finetuned on the machine-translated instruct dataset [OpenHermes2.5](https://huggingface.co/datasets/timpal0l/OpenHermes-2.5-sv). ## How to use: ```python from transformers import pipeline pipe = pipeline( "text-generation", "timpal0l/Mistral-7B-v0.1-flashback-v2-instruct", device_map="auto" ) text = """ Hur många ägg har jag? Jag hade 10 ägg, sen gav jag bort 5 ägg. Sen fick jag 3 ägg av en kompis. """ generated = pipe(f"USER:{text}ASSISTANT:", max_length=512, temperature=0.6) print(generated[0]["generated_text"].split("ASSISTANT: ")[1:][0]) ``` Output: ```html Du har 8 ägg. Här är resonemanget: 1. Du börjar med 10 ägg 2. Du ger bort 5 ägg, vilket lämnar dig med 10 - 5 = 5 ägg 3. Sedan får du 3 ägg av en kompis, vilket gör att du har 5 + 3 = 8 ägg. ```
HazSylvia/MISTRAL-FINETUNED-ALPACA-xp
HazSylvia
2024-02-22T21:37:34Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-22T21:37:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
juntaoyuan/chemistry-assistant-13b
juntaoyuan
2024-02-22T21:31:26Z
109
5
null
[ "gguf", "chemistry", "teaching assistant", "LlamaEdge", "WasmEdge", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-02-19T02:36:50Z
--- license: apache-2.0 tags: - chemistry - teaching assistant - LlamaEdge - WasmEdge --- This model is fine-tuned from the [llama2-13b-chat](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) base model with an SFT QA dataset generated from the book [The Elements](https://www.amazon.com/Elements-Visual-Exploration-Every-Universe/dp/1579128149). The fine-tuned model has a good understanding of and proper focus on chemistry terms, making it a good model for RAG applications on chemistry subjects. The base model is quantized to Q5_K_M and then fine-tuned with the generated QA dataset. The LoRA layers are then merged back into the base model. The fine-tuned model has the same number of parameters, quantization, and prompt template as the base model. * Fine-tuned model: [chemistry-assistant-13b-q5_k_m.gguf](https://huggingface.co/juntaoyuan/chemistry-assistant-13b/resolve/main/chemistry-assistant-13b-q5_k_m.gguf?download=true) * Prompt template: same as Llama-2-chat * Base model: [Llama-2-13b-chat-hf-Q5_K_M.gguf](https://huggingface.co/juntaoyuan/chemistry-assistant-13b/resolve/main/Llama-2-13b-chat-hf-Q5_K_M.gguf?download=true) * SFT dataset: [train.txt](https://huggingface.co/juntaoyuan/chemistry-assistant-13b/resolve/main/train.txt?download=true)
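The card targets LlamaEdge/WasmEdge, but the GGUF file can also be sanity-checked locally; a sketch with llama-cpp-python, using the filename and Llama-2-chat prompt template stated above:
```python
from llama_cpp import Llama

# Filename comes from the card; download the file from the repo first.
llm = Llama(model_path="chemistry-assistant-13b-q5_k_m.gguf", n_ctx=4096)

# Llama-2-chat template, as the card specifies.
prompt = (
    "<s>[INST] <<SYS>>\nYou are a chemistry teaching assistant.\n<</SYS>>\n\n"
    "What is the atomic number of tungsten? [/INST]"
)
out = llm(prompt, max_tokens=128)
print(out["choices"][0]["text"])
```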
guirnd/ppo-LunarLander-v2
guirnd
2024-02-22T21:30:17Z
1
0
stable-baselines3
[ "stable-baselines3", "tensorboard", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-19T13:55:10Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 264.64 +/- 19.93 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
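This card's usage block is also a TODO; the same hedged pattern as the earlier LunarLander entry applies, with the checkpoint filename again assumed:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed from the huggingface_sb3 convention.
checkpoint = load_from_hub(
    repo_id="guirnd/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```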
adityarra07/whisper-medium-train_noise4
adityarra07
2024-02-22T21:28:16Z
2
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-21T15:18:58Z
--- license: apache-2.0 base_model: openai/whisper-medium tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-medium-train_noise4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-medium-train_noise4 This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0105 - Wer: 2.0416 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.1666 | 1.0 | 2863 | 0.0897 | 6.6866 | | 0.0337 | 2.0 | 5726 | 0.0348 | 3.5587 | | 0.0088 | 3.0 | 8589 | 0.0206 | 2.5098 | | 0.0025 | 4.0 | 11452 | 0.0124 | 2.3038 | | 0.0008 | 5.0 | 14315 | 0.0110 | 1.9667 | | 0.0002 | 6.0 | 17178 | 0.0105 | 2.0416 | ### Framework versions - Transformers 4.33.1 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
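No usage example is provided; a minimal transcription sketch, assuming a local audio file (the pipeline resamples to the 16 kHz input Whisper expects):
```python
from transformers import pipeline

# chunk_length_s lets the pipeline handle audio longer than Whisper's 30 s window.
asr = pipeline(
    "automatic-speech-recognition",
    model="adityarra07/whisper-medium-train_noise4",
    chunk_length_s=30,
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```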
goxai/LLWM
goxai
2024-02-22T21:21:11Z
4
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-22T20:56:18Z
--- inference: false --- # LWM-Text-1M-Chat Model Card ## Model details **Model type:** LWM-Text-1M-Chat is an open-source model trained from LLaMA-2 on a filtered subset of the Books3 dataset. It is an auto-regressive language model, based on the transformer architecture. **Model date:** LWM-Text-1M-Chat was trained in December 2023. **Paper or resources for more information:** https://largeworldmodel.github.io/ ## License Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved. **Where to send questions or comments about the model:** https://github.com/LargeWorldModel/lwm/issues ## Training dataset - A subset of 800 Books3 documents with 1M+ tokens
Keertss/bert-finetuned-ner-model
Keertss
2024-02-22T21:15:50Z
6
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-02-22T21:15:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hari31416/RAGOptimize_Adapter
hari31416
2024-02-22T21:14:54Z
0
0
transformers
[ "transformers", "safetensors", "text-generation", "arxiv:1910.09700", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2024-02-21T09:16:17Z
--- license: mit library_name: transformers pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mi-rei/Llama-2-7b-CT_brief_full_dataset
mi-rei
2024-02-22T21:13:01Z
1
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:mi-rei/CT_brief_full", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-22T17:49:57Z
--- datasets: - mi-rei/CT_brief_full --- Trained for 1 epoch\ \ Accuracy: 0.492\ F1 Score: 0.516\ Accuracy for label 0: 0.437\ Accuracy for label 1: 0.548 Classification Report: | | precision | recall | f1-score | support | |--------------|-----------|--------|----------|---------| | 0 | 0.50 | 0.44 | 0.47 | 382 | | 1 | 0.49 | 0.55 | 0.52 | 372 | | accuracy | | | 0.49 | 754 | | macro avg | 0.49 | 0.49 | 0.49 | 754 | | weighted avg | 0.49 | 0.49 | 0.49 | 754 | Confusion Matrix:\ [[167 215 0]\ [168 204 0]\ [ 0 0 0]]
pjbhaumik/crossencoder-airline-refine-010
pjbhaumik
2024-02-22T21:09:46Z
6
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:cross-encoder/stsb-roberta-large", "base_model:finetune:cross-encoder/stsb-roberta-large", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-22T21:09:09Z
--- license: apache-2.0 base_model: cross-encoder/stsb-roberta-large tags: - generated_from_trainer model-index: - name: crossencoder-airline-refine-010 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # crossencoder-airline-refine-010 This model is a fine-tuned version of [cross-encoder/stsb-roberta-large](https://huggingface.co/cross-encoder/stsb-roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 8.0523 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-08 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 500 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 15.341 | 1.0 | 157 | 14.5631 | | 12.2879 | 2.0 | 314 | 13.3058 | | 12.5681 | 3.0 | 471 | 11.4717 | | 12.8002 | 4.0 | 628 | 9.8398 | | 10.1409 | 5.0 | 785 | 8.8337 | | 9.4818 | 6.0 | 942 | 8.1771 | | 9.277 | 7.0 | 1099 | 7.7594 | | 9.2643 | 8.0 | 1256 | 7.5311 | | 8.7124 | 9.0 | 1413 | 7.4428 | | 8.9775 | 10.0 | 1570 | 7.4347 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.0.1 - Datasets 2.17.1 - Tokenizers 0.15.2
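As a fine-tune of a sentence-transformers cross-encoder, the model should load with the CrossEncoder class; a sketch (the score scale after this fine-tuning objective is not documented):
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("pjbhaumik/crossencoder-airline-refine-010")
# Scores each (query, passage) pair jointly; higher should mean more relevant.
scores = model.predict([
    ("baggage allowance international flight", "Each passenger may check two bags up to 23 kg."),
    ("baggage allowance international flight", "Our lounge serves breakfast from 5 am."),
])
print(scores)
```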
christinacdl/XLM_RoBERTa-Clickbait-Detection-NEW-Data
christinacdl
2024-02-22T21:08:45Z
5
1
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-22T15:49:45Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: XLM_RoBERTa-Clickbait-Detection-NEW-Data results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XLM_RoBERTa-Clickbait-Detection-NEW-Data This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4668 - Micro F1: 0.9032 - Macro F1: 0.8997 - Accuracy: 0.9032 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.36.1 - Pytorch 2.1.0+cu121 - Datasets 2.13.1 - Tokenizers 0.15.0
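A minimal inference sketch (not from the card); the mapping of LABEL_0/LABEL_1 to clickbait/not-clickbait is undocumented, so the raw label is printed:
```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="christinacdl/XLM_RoBERTa-Clickbait-Detection-NEW-Data",
)
# Which label id means "clickbait" is not stated on the card.
print(detector("You won't BELIEVE what happened next!"))
```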
glacio-dev/Qwen1.5-4B-Chat-Q4
glacio-dev
2024-02-22T21:08:30Z
5
0
mlx
[ "mlx", "safetensors", "qwen2", "chat", "text-generation", "conversational", "en", "license:other", "region:us" ]
text-generation
2024-02-22T20:50:35Z
--- language: - en license: other tags: - chat - mlx license_name: tongyi-qianwen-research license_link: https://huggingface.co/Qwen/Qwen1.5-4B-Chat/blob/main/LICENSE pipeline_tag: text-generation --- # glacio-dev/Qwen1.5-4B-Chat-Q4 This model was converted to MLX format from [`Qwen/Qwen1.5-4B-Chat`](https://huggingface.co/Qwen/Qwen1.5-4B-Chat). Refer to the [original model card](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("glacio-dev/Qwen1.5-4B-Chat-Q4") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
peldrak/segformer-b4-ade-512-512-finetuned-coastTrain
peldrak
2024-02-22T21:02:17Z
187
0
transformers
[ "transformers", "pytorch", "segformer", "vision", "image-segmentation", "generated_from_trainer", "base_model:nvidia/segformer-b4-finetuned-ade-512-512", "base_model:finetune:nvidia/segformer-b4-finetuned-ade-512-512", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2024-02-22T14:08:58Z
--- license: other base_model: nvidia/segformer-b4-finetuned-ade-512-512 tags: - vision - image-segmentation - generated_from_trainer model-index: - name: segformer-b4-ade-512-512-finetuned-coastTrain results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b4-ade-512-512-finetuned-coastTrain This model is a fine-tuned version of [nvidia/segformer-b4-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b4-finetuned-ade-512-512) on the peldrak/coastTrain_512-512 dataset. It achieves the following results on the evaluation set: - Loss: 0.5503 - Mean Iou: 0.7259 - Mean Accuracy: 0.8239 - Overall Accuracy: 0.8905 - Accuracy Water: 0.9420 - Accuracy Whitewater: 0.8275 - Accuracy Sediment: 0.8697 - Accuracy Other Natural Terrain: 0.5254 - Accuracy Vegetation: 0.9118 - Accuracy Development: 0.8725 - Accuracy Unknown: 0.8182 - Iou Water: 0.8743 - Iou Whitewater: 0.7005 - Iou Sediment: 0.7725 - Iou Other Natural Terrain: 0.4188 - Iou Vegetation: 0.8159 - Iou Development: 0.7204 - Iou Unknown: 0.7786 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Water | Accuracy Whitewater | Accuracy Sediment | Accuracy Other Natural Terrain | Accuracy Vegetation | Accuracy Development | Accuracy Unknown | Iou Water | Iou Whitewater | Iou Sediment | Iou Other Natural Terrain | Iou Vegetation | Iou Development | Iou Unknown | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------:|:-------------------:|:-----------------:|:------------------------------:|:-------------------:|:--------------------:|:----------------:|:---------:|:--------------:|:------------:|:-------------------------:|:--------------:|:---------------:|:-----------:| | 1.7996 | 0.05 | 20 | 1.6618 | 0.1385 | 0.2334 | 0.4197 | 0.2605 | 0.0017 | 0.0998 | 0.0001 | 0.9645 | 0.0257 | 0.2812 | 0.2371 | 0.0017 | 0.0706 | 0.0001 | 0.3546 | 0.0243 | 0.2812 | | 1.6351 | 0.11 | 40 | 1.3796 | 0.2802 | 0.3790 | 0.6323 | 0.7576 | 0.0019 | 0.3713 | 0.0001 | 0.9269 | 0.2917 | 0.3035 | 0.6296 | 0.0019 | 0.2979 | 0.0001 | 0.4993 | 0.2290 | 0.3033 | | 1.3244 | 0.16 | 60 | 1.1775 | 0.2816 | 0.3728 | 0.6589 | 0.8328 | 0.0278 | 0.2458 | 0.0 | 0.9754 | 0.1749 | 0.3532 | 0.7031 | 0.0278 | 0.2039 | 0.0 | 0.5275 | 0.1594 | 0.3491 | | 1.141 | 0.22 | 80 | 1.0487 | 0.3248 | 0.4152 | 0.6952 | 0.8715 | 0.0146 | 0.2582 | 0.0 | 0.9688 | 0.3943 | 0.3991 | 0.7104 | 0.0146 | 0.2183 | 0.0 | 0.5762 | 0.3599 | 0.3942 | | 1.2046 | 0.27 | 100 | 0.9807 | 0.3916 | 0.5008 | 0.7341 | 0.8654 | 0.0558 | 0.6314 | 0.0 | 0.9231 | 0.6309 | 0.3992 | 0.7483 | 0.0556 | 0.4279 | 0.0 | 0.6178 | 0.4942 | 0.3973 | | 0.8813 | 0.32 | 120 | 0.9001 | 0.4088 | 0.5210 | 0.7502 | 0.8914 | 0.0256 | 0.6221 | 0.0 | 0.8907 | 0.7535 | 0.4639 | 0.7625 | 0.0256 | 0.4717 | 0.0 | 0.6289 | 0.5149 | 0.4577 | | 1.1054 | 0.38 | 140 | 0.8345 | 0.4071 | 0.5236 | 0.7503 | 0.9017 | 0.0017 | 
0.6190 | 0.0 | 0.8647 | 0.8093 | 0.4685 | 0.7541 | 0.0017 | 0.4967 | 0.0 | 0.6349 | 0.5003 | 0.4621 | | 1.33 | 0.43 | 160 | 0.8624 | 0.4254 | 0.5412 | 0.7580 | 0.8682 | 0.0640 | 0.6765 | 0.0 | 0.9082 | 0.8024 | 0.4692 | 0.7719 | 0.0639 | 0.5188 | 0.0 | 0.6361 | 0.5249 | 0.4623 | | 0.9514 | 0.49 | 180 | 0.7666 | 0.4413 | 0.5632 | 0.7751 | 0.8997 | 0.0773 | 0.7375 | 0.0 | 0.8850 | 0.8480 | 0.4949 | 0.7864 | 0.0770 | 0.5149 | 0.0 | 0.6838 | 0.5424 | 0.4848 | | 0.9908 | 0.54 | 200 | 0.7238 | 0.4625 | 0.5783 | 0.7808 | 0.9173 | 0.1737 | 0.7531 | 0.0 | 0.8696 | 0.8350 | 0.4994 | 0.7847 | 0.1722 | 0.5914 | 0.0 | 0.6776 | 0.5230 | 0.4888 | | 0.6507 | 0.59 | 220 | 0.7059 | 0.4730 | 0.5939 | 0.7842 | 0.9030 | 0.2181 | 0.8172 | 0.0 | 0.8698 | 0.8533 | 0.4961 | 0.7760 | 0.2161 | 0.5692 | 0.0 | 0.6904 | 0.5716 | 0.4877 | | 0.8612 | 0.65 | 240 | 0.6863 | 0.4878 | 0.6020 | 0.7871 | 0.9168 | 0.2972 | 0.8751 | 0.0 | 0.8585 | 0.7589 | 0.5076 | 0.7607 | 0.2871 | 0.5770 | 0.0 | 0.6964 | 0.5970 | 0.4963 | | 0.817 | 0.7 | 260 | 0.6994 | 0.4870 | 0.5971 | 0.7853 | 0.9158 | 0.3467 | 0.7023 | 0.0 | 0.8749 | 0.7940 | 0.5462 | 0.7684 | 0.3327 | 0.4967 | 0.0 | 0.6952 | 0.5854 | 0.5306 | | 0.7013 | 0.76 | 280 | 0.7039 | 0.5090 | 0.6147 | 0.7952 | 0.8864 | 0.4188 | 0.8107 | 0.0 | 0.9333 | 0.7519 | 0.5019 | 0.7943 | 0.3898 | 0.6044 | 0.0 | 0.6824 | 0.6012 | 0.4910 | | 0.5296 | 0.81 | 300 | 0.6780 | 0.5184 | 0.6354 | 0.7942 | 0.9253 | 0.6229 | 0.6595 | 0.0 | 0.8711 | 0.8447 | 0.5244 | 0.7922 | 0.5030 | 0.5563 | 0.0 | 0.6795 | 0.5932 | 0.5049 | | 1.9473 | 0.86 | 320 | 0.6378 | 0.5484 | 0.6550 | 0.8145 | 0.9153 | 0.5907 | 0.7930 | 0.0 | 0.9067 | 0.8167 | 0.5625 | 0.8053 | 0.5191 | 0.6401 | 0.0 | 0.7114 | 0.6272 | 0.5354 | | 0.6526 | 0.92 | 340 | 0.6640 | 0.5198 | 0.6270 | 0.8036 | 0.9080 | 0.4719 | 0.7448 | 0.0 | 0.9251 | 0.8294 | 0.5101 | 0.8101 | 0.4285 | 0.6169 | 0.0 | 0.6985 | 0.5846 | 0.5003 | | 0.6158 | 0.97 | 360 | 0.6036 | 0.5635 | 0.6660 | 0.8236 | 0.9305 | 0.5957 | 0.8159 | 0.0 | 0.8971 | 0.8425 | 0.5806 | 0.8109 | 0.5478 | 0.6469 | 0.0 | 0.7266 | 0.6656 | 0.5467 | | 0.6889 | 1.03 | 380 | 0.6122 | 0.5689 | 0.6857 | 0.8245 | 0.9078 | 0.6870 | 0.8587 | 0.0 | 0.8972 | 0.8718 | 0.5773 | 0.8265 | 0.5679 | 0.6732 | 0.0 | 0.7144 | 0.6527 | 0.5475 | | 0.8398 | 1.08 | 400 | 0.6046 | 0.5639 | 0.6609 | 0.8259 | 0.9239 | 0.6264 | 0.8207 | 0.0 | 0.9428 | 0.7562 | 0.5564 | 0.8154 | 0.5376 | 0.6656 | 0.0 | 0.7300 | 0.6566 | 0.5419 | | 0.5525 | 1.14 | 420 | 0.5844 | 0.5614 | 0.6918 | 0.8149 | 0.9346 | 0.6915 | 0.8565 | 0.0001 | 0.7892 | 0.9305 | 0.6401 | 0.8060 | 0.5956 | 0.6503 | 0.0001 | 0.7033 | 0.5758 | 0.5986 | | 0.4518 | 1.19 | 440 | 0.5928 | 0.5694 | 0.6943 | 0.8232 | 0.8681 | 0.6488 | 0.8886 | 0.0001 | 0.8696 | 0.8152 | 0.7697 | 0.7924 | 0.5603 | 0.5927 | 0.0001 | 0.7465 | 0.6463 | 0.6476 | | 0.3196 | 1.24 | 460 | 0.6074 | 0.5595 | 0.6550 | 0.8218 | 0.9200 | 0.5712 | 0.8242 | 0.0 | 0.9328 | 0.7890 | 0.5480 | 0.8181 | 0.5260 | 0.6730 | 0.0 | 0.7156 | 0.6673 | 0.5162 | | 0.5027 | 1.3 | 480 | 0.5926 | 0.5682 | 0.6860 | 0.8250 | 0.9250 | 0.7513 | 0.8576 | 0.0 | 0.9062 | 0.8556 | 0.5061 | 0.8217 | 0.5813 | 0.6973 | 0.0 | 0.7260 | 0.6568 | 0.4940 | | 0.6623 | 1.35 | 500 | 0.5957 | 0.5612 | 0.6748 | 0.8241 | 0.9321 | 0.6650 | 0.8496 | 0.0 | 0.9014 | 0.8574 | 0.5178 | 0.8284 | 0.5701 | 0.6787 | 0.0 | 0.7315 | 0.6148 | 0.5048 | | 0.4123 | 1.41 | 520 | 0.5802 | 0.5710 | 0.6924 | 0.8242 | 0.9141 | 0.7787 | 0.8525 | 0.0 | 0.9029 | 0.8788 | 0.5197 | 0.8244 | 0.6246 | 0.6937 | 0.0 | 0.7247 | 0.6218 | 0.5080 | | 0.3567 | 1.46 | 540 | 
0.5760 | 0.5750 | 0.6763 | 0.8265 | 0.9232 | 0.7055 | 0.8190 | 0.0002 | 0.9256 | 0.8169 | 0.5436 | 0.8258 | 0.6074 | 0.6963 | 0.0002 | 0.7149 | 0.6564 | 0.5244 |
| 0.3404 | 1.51 | 560 | 0.5725 | 0.5788 | 0.6787 | 0.8325 | 0.9322 | 0.6995 | 0.8212 | 0.0021 | 0.9249 | 0.7782 | 0.5930 | 0.8140 | 0.5872 | 0.6826 | 0.0021 | 0.7438 | 0.6502 | 0.5719 |
| 0.3542 | 1.57 | 580 | 0.5759 | 0.5872 | 0.6949 | 0.8336 | 0.9302 | 0.7720 | 0.8318 | 0.0004 | 0.9124 | 0.8626 | 0.5549 | 0.8286 | 0.6374 | 0.7063 | 0.0004 | 0.7300 | 0.6646 | 0.5429 |
| 0.5647 | 1.62 | 600 | 0.5635 | 0.5926 | 0.7132 | 0.8380 | 0.9124 | 0.8427 | 0.8749 | 0.0003 | 0.9089 | 0.8359 | 0.6171 | 0.8388 | 0.6198 | 0.6904 | 0.0003 | 0.7343 | 0.6800 | 0.5846 |
| 0.342 | 1.68 | 620 | 0.5616 | 0.5793 | 0.7065 | 0.8285 | 0.9150 | 0.8240 | 0.8075 | 0.0 | 0.8730 | 0.8915 | 0.6343 | 0.8351 | 0.6019 | 0.6871 | 0.0 | 0.7122 | 0.6332 | 0.5861 |
| 0.4183 | 1.73 | 640 | 0.5514 | 0.5959 | 0.6992 | 0.8388 | 0.9208 | 0.7985 | 0.8213 | 0.0014 | 0.9300 | 0.7947 | 0.6275 | 0.8396 | 0.6423 | 0.6872 | 0.0014 | 0.7278 | 0.6710 | 0.6018 |
| 1.0677 | 1.78 | 660 | 0.5618 | 0.5915 | 0.6959 | 0.8370 | 0.9145 | 0.7630 | 0.8128 | 0.0 | 0.9274 | 0.8135 | 0.6402 | 0.8335 | 0.6267 | 0.6889 | 0.0 | 0.7329 | 0.6622 | 0.5965 |
| 0.5682 | 1.84 | 680 | 0.5212 | 0.6035 | 0.7052 | 0.8464 | 0.9383 | 0.7473 | 0.8452 | 0.0028 | 0.9098 | 0.8419 | 0.6513 | 0.8326 | 0.6173 | 0.6857 | 0.0028 | 0.7524 | 0.6972 | 0.6363 |
| 0.4499 | 1.89 | 700 | 0.5389 | 0.6082 | 0.7131 | 0.8436 | 0.9029 | 0.7818 | 0.8322 | 0.0123 | 0.9274 | 0.8649 | 0.6701 | 0.8398 | 0.6534 | 0.6920 | 0.0123 | 0.7300 | 0.6789 | 0.6509 |
| 0.737 | 1.95 | 720 | 0.5390 | 0.5997 | 0.7012 | 0.8387 | 0.8985 | 0.7270 | 0.8226 | 0.0193 | 0.9179 | 0.7850 | 0.7382 | 0.8268 | 0.6141 | 0.6959 | 0.0193 | 0.7329 | 0.6829 | 0.6258 |
| 1.8862 | 2.0 | 740 | 0.5632 | 0.5918 | 0.7112 | 0.8369 | 0.9257 | 0.8057 | 0.8401 | 0.0134 | 0.8963 | 0.9235 | 0.5735 | 0.8326 | 0.6244 | 0.7105 | 0.0134 | 0.7381 | 0.6613 | 0.5624 |
| 0.3969 | 2.05 | 760 | 0.5738 | 0.5775 | 0.6803 | 0.8329 | 0.9267 | 0.7258 | 0.8365 | 0.0115 | 0.9479 | 0.7698 | 0.5435 | 0.8327 | 0.5740 | 0.7178 | 0.0115 | 0.7364 | 0.6421 | 0.5283 |
| 0.4485 | 2.11 | 780 | 0.5115 | 0.5808 | 0.7162 | 0.8347 | 0.9213 | 0.8359 | 0.8539 | 0.0257 | 0.8887 | 0.9209 | 0.5672 | 0.8416 | 0.5746 | 0.7114 | 0.0256 | 0.7521 | 0.6147 | 0.5459 |
| 0.4601 | 2.16 | 800 | 0.4928 | 0.6289 | 0.7265 | 0.8589 | 0.9338 | 0.7416 | 0.7906 | 0.0709 | 0.9098 | 0.8631 | 0.7759 | 0.8394 | 0.6178 | 0.7066 | 0.0707 | 0.7757 | 0.6895 | 0.7030 |
| 1.3914 | 2.22 | 820 | 0.4974 | 0.6289 | 0.7243 | 0.8595 | 0.9320 | 0.7422 | 0.8269 | 0.0654 | 0.9166 | 0.8002 | 0.7869 | 0.8399 | 0.6236 | 0.7058 | 0.0652 | 0.7781 | 0.6824 | 0.7073 |
| 0.2324 | 2.27 | 840 | 0.4771 | 0.6282 | 0.7488 | 0.8591 | 0.9165 | 0.8484 | 0.8886 | 0.0381 | 0.8783 | 0.8598 | 0.8117 | 0.8444 | 0.6400 | 0.6924 | 0.0380 | 0.7777 | 0.6837 | 0.7214 |
| 0.6388 | 2.32 | 860 | 0.4670 | 0.6300 | 0.7361 | 0.8591 | 0.9277 | 0.8268 | 0.8403 | 0.0739 | 0.9251 | 0.8473 | 0.7111 | 0.8454 | 0.6490 | 0.7112 | 0.0731 | 0.7831 | 0.6738 | 0.6743 |
| 0.2188 | 2.38 | 880 | 0.4928 | 0.6232 | 0.7423 | 0.8471 | 0.9248 | 0.8602 | 0.8602 | 0.2115 | 0.9274 | 0.8505 | 0.5618 | 0.8433 | 0.6290 | 0.7404 | 0.2050 | 0.7739 | 0.6265 | 0.5444 |
| 0.7092 | 2.43 | 900 | 0.4844 | 0.6309 | 0.7335 | 0.8565 | 0.9195 | 0.7766 | 0.8661 | 0.1365 | 0.9419 | 0.8232 | 0.6709 | 0.8537 | 0.6358 | 0.7335 | 0.1323 | 0.7738 | 0.6501 | 0.6374 |
| 0.7643 | 2.49 | 920 | 0.4768 | 0.6334 | 0.7428 | 0.8560 | 0.9253 | 0.7818 | 0.8491 | 0.1362 | 0.8976 | 0.8848 | 0.7245 | 0.8473 | 0.6349 | 0.7250 | 0.1338 | 0.7687 | 0.6468 | 0.6771 |
| 0.3122 | 2.54 | 940 | 0.4602 | 0.6392 | 0.7361 | 0.8620 | 0.9343 | 0.7936 | 0.8124 | 0.0915 | 0.9176 | 0.8384 | 0.7647 | 0.8443 | 0.6415 | 0.7129 | 0.0901 | 0.7736 | 0.6969 | 0.7151 |
| 0.4749 | 2.59 | 960 | 0.5159 | 0.6405 | 0.7421 | 0.8649 | 0.9295 | 0.8029 | 0.8561 | 0.0751 | 0.9145 | 0.8401 | 0.7769 | 0.8443 | 0.6460 | 0.7174 | 0.0742 | 0.7883 | 0.6921 | 0.7210 |
| 0.2705 | 2.65 | 980 | 0.5420 | 0.6110 | 0.7160 | 0.8431 | 0.9321 | 0.8193 | 0.8397 | 0.1128 | 0.9387 | 0.8182 | 0.5512 | 0.8466 | 0.6527 | 0.7419 | 0.1109 | 0.7489 | 0.6416 | 0.5345 |
| 0.4277 | 2.7 | 1000 | 0.5636 | 0.5942 | 0.7261 | 0.8374 | 0.9211 | 0.8345 | 0.8916 | 0.0744 | 0.8874 | 0.9127 | 0.5612 | 0.8470 | 0.6492 | 0.6832 | 0.0737 | 0.7656 | 0.6105 | 0.5300 |
| 0.3361 | 2.76 | 1020 | 0.5560 | 0.5980 | 0.7126 | 0.8379 | 0.9340 | 0.8012 | 0.8863 | 0.1015 | 0.9195 | 0.8318 | 0.5139 | 0.8424 | 0.6616 | 0.6936 | 0.1003 | 0.7576 | 0.6233 | 0.5075 |
| 0.2131 | 2.81 | 1040 | 0.4922 | 0.6332 | 0.7542 | 0.8464 | 0.9243 | 0.8445 | 0.8604 | 0.2245 | 0.8791 | 0.9215 | 0.6252 | 0.8423 | 0.6536 | 0.7311 | 0.2194 | 0.7509 | 0.6570 | 0.5780 |
| 0.4304 | 2.86 | 1060 | 0.4797 | 0.6432 | 0.7414 | 0.8561 | 0.9357 | 0.8259 | 0.8261 | 0.1822 | 0.9262 | 0.8392 | 0.6541 | 0.8442 | 0.6658 | 0.7358 | 0.1796 | 0.7624 | 0.6887 | 0.6260 |
| 0.3788 | 2.92 | 1080 | 0.4500 | 0.6428 | 0.7568 | 0.8583 | 0.9292 | 0.8807 | 0.8560 | 0.1840 | 0.9145 | 0.8782 | 0.6547 | 0.8469 | 0.6292 | 0.7258 | 0.1791 | 0.7699 | 0.7106 | 0.6380 |
| 0.3109 | 2.97 | 1100 | 0.4402 | 0.6453 | 0.7467 | 0.8584 | 0.9378 | 0.7490 | 0.8789 | 0.2150 | 0.9112 | 0.8963 | 0.6385 | 0.8503 | 0.6480 | 0.7422 | 0.2072 | 0.7725 | 0.6728 | 0.6245 |
| 0.313 | 3.03 | 1120 | 0.4730 | 0.6509 | 0.7636 | 0.8611 | 0.9233 | 0.8228 | 0.9004 | 0.2175 | 0.9036 | 0.8874 | 0.6900 | 0.8466 | 0.6430 | 0.7358 | 0.1968 | 0.7787 | 0.6857 | 0.6697 |
| 0.5267 | 3.08 | 1140 | 0.4381 | 0.6790 | 0.7837 | 0.8731 | 0.9227 | 0.8276 | 0.8415 | 0.2900 | 0.9056 | 0.8945 | 0.8039 | 0.8552 | 0.6517 | 0.7441 | 0.2698 | 0.7864 | 0.6791 | 0.7669 |
| 0.6162 | 3.14 | 1160 | 0.4643 | 0.6700 | 0.7670 | 0.8663 | 0.9274 | 0.8280 | 0.8118 | 0.2924 | 0.9304 | 0.8529 | 0.7259 | 0.8506 | 0.6576 | 0.7353 | 0.2801 | 0.7738 | 0.6960 | 0.6968 |
| 0.3309 | 3.19 | 1180 | 0.4844 | 0.6540 | 0.7608 | 0.8618 | 0.9289 | 0.8232 | 0.8624 | 0.2114 | 0.9033 | 0.8873 | 0.7092 | 0.8458 | 0.6655 | 0.7206 | 0.1999 | 0.7761 | 0.6856 | 0.6845 |
| 0.2346 | 3.24 | 1200 | 0.4521 | 0.6686 | 0.7638 | 0.8681 | 0.9423 | 0.7894 | 0.8419 | 0.2557 | 0.9073 | 0.8754 | 0.7347 | 0.8413 | 0.6693 | 0.7318 | 0.2380 | 0.7906 | 0.7006 | 0.7088 |
| 0.2851 | 3.3 | 1220 | 0.4731 | 0.6556 | 0.7634 | 0.8647 | 0.9003 | 0.6759 | 0.8933 | 0.2520 | 0.8915 | 0.8734 | 0.8572 | 0.8400 | 0.5774 | 0.7224 | 0.2397 | 0.7908 | 0.7053 | 0.7138 |
| 0.293 | 3.35 | 1240 | 0.4126 | 0.6998 | 0.7967 | 0.8843 | 0.9266 | 0.8391 | 0.8298 | 0.3422 | 0.9229 | 0.8507 | 0.8652 | 0.8573 | 0.6491 | 0.7313 | 0.3177 | 0.8105 | 0.7219 | 0.8111 |
| 0.839 | 3.41 | 1260 | 0.4382 | 0.6837 | 0.7752 | 0.8793 | 0.9373 | 0.7954 | 0.8550 | 0.2738 | 0.9309 | 0.8478 | 0.7862 | 0.8548 | 0.6647 | 0.7335 | 0.2611 | 0.8145 | 0.7042 | 0.7530 |
| 0.5775 | 3.46 | 1280 | 0.4951 | 0.6536 | 0.7549 | 0.8610 | 0.9445 | 0.7814 | 0.9085 | 0.2600 | 0.9057 | 0.8165 | 0.6675 | 0.8380 | 0.6630 | 0.7178 | 0.2207 | 0.7940 | 0.6989 | 0.6427 |
| 0.3429 | 3.51 | 1300 | 0.4591 | 0.6826 | 0.7888 | 0.8713 | 0.9157 | 0.8246 | 0.8578 | 0.3419 | 0.9111 | 0.9014 | 0.7690 | 0.8469 | 0.6664 | 0.7445 | 0.3085 | 0.7920 | 0.6777 | 0.7419 |
| 0.2416 | 3.57 | 1320 | 0.4521 | 0.6707 | 0.7700 | 0.8722 | 0.9411 | 0.8200 | 0.8904 | 0.2413 | 0.9090 | 0.8340 | 0.7542 | 0.8469 | 0.6624 | 0.7226 | 0.2227 | 0.8006 | 0.7015 | 0.7381 |
| 0.4017 | 3.62 | 1340 | 0.4673 | 0.6726 | 0.7685 | 0.8720 | 0.9368 | 0.7864 | 0.8589 | 0.2893 | 0.9194 | 0.8137 | 0.7752 | 0.8509 | 0.6550 | 0.7348 | 0.2628 | 0.8004 | 0.6654 | 0.7386 |
| 0.1852 | 3.68 | 1360 | 0.4635 | 0.6838 | 0.7842 | 0.8742 | 0.9347 | 0.8221 | 0.8274 | 0.3451 | 0.9135 | 0.8714 | 0.7755 | 0.8598 | 0.6648 | 0.7542 | 0.3115 | 0.7953 | 0.6643 | 0.7366 |
| 0.6558 | 3.73 | 1380 | 0.5199 | 0.6479 | 0.7609 | 0.8560 | 0.9454 | 0.8007 | 0.8875 | 0.3075 | 0.8921 | 0.8780 | 0.6153 | 0.8552 | 0.6688 | 0.7489 | 0.2787 | 0.7888 | 0.6062 | 0.5883 |
| 0.3409 | 3.78 | 1400 | 0.4640 | 0.6676 | 0.7790 | 0.8633 | 0.9285 | 0.7885 | 0.8312 | 0.4431 | 0.9145 | 0.8346 | 0.7125 | 0.8583 | 0.6600 | 0.7396 | 0.3493 | 0.8009 | 0.6203 | 0.6446 |
| 0.2649 | 3.84 | 1420 | 0.5892 | 0.6478 | 0.7655 | 0.8511 | 0.9392 | 0.7808 | 0.8388 | 0.4182 | 0.8934 | 0.8889 | 0.5991 | 0.8528 | 0.6674 | 0.7215 | 0.3482 | 0.7895 | 0.5867 | 0.5687 |
| 0.4681 | 3.89 | 1440 | 0.4774 | 0.6900 | 0.7848 | 0.8785 | 0.9377 | 0.8266 | 0.8489 | 0.3595 | 0.9327 | 0.8117 | 0.7766 | 0.8532 | 0.6745 | 0.7194 | 0.3210 | 0.8169 | 0.7024 | 0.7428 |
| 0.7559 | 3.95 | 1460 | 0.4725 | 0.6771 | 0.7733 | 0.8704 | 0.9352 | 0.8051 | 0.8298 | 0.3240 | 0.9215 | 0.8545 | 0.7427 | 0.8539 | 0.6685 | 0.7301 | 0.3056 | 0.7896 | 0.6841 | 0.7077 |
| 0.3047 | 4.0 | 1480 | 0.4709 | 0.6773 | 0.7762 | 0.8686 | 0.9153 | 0.7712 | 0.8651 | 0.3327 | 0.9098 | 0.8260 | 0.8129 | 0.8451 | 0.6611 | 0.7425 | 0.3060 | 0.7887 | 0.6874 | 0.7101 |
| 0.2191 | 4.05 | 1500 | 0.4516 | 0.6901 | 0.7932 | 0.8757 | 0.9385 | 0.8364 | 0.8641 | 0.3807 | 0.9018 | 0.8517 | 0.7796 | 0.8488 | 0.6782 | 0.7423 | 0.3259 | 0.8072 | 0.6781 | 0.7503 |
| 0.3001 | 4.11 | 1520 | 0.4885 | 0.6808 | 0.7846 | 0.8677 | 0.9408 | 0.8009 | 0.8662 | 0.4246 | 0.9019 | 0.8494 | 0.7082 | 0.8505 | 0.6773 | 0.7501 | 0.3428 | 0.7870 | 0.6753 | 0.6823 |
| 0.4489 | 4.16 | 1540 | 0.4531 | 0.6884 | 0.7905 | 0.8779 | 0.9243 | 0.8295 | 0.8128 | 0.3443 | 0.9181 | 0.8716 | 0.8325 | 0.8591 | 0.6756 | 0.7198 | 0.2901 | 0.8047 | 0.6798 | 0.7896 |
| 0.3684 | 4.22 | 1560 | 0.4794 | 0.6871 | 0.7968 | 0.8723 | 0.9387 | 0.8436 | 0.8621 | 0.4160 | 0.8944 | 0.8819 | 0.7407 | 0.8562 | 0.6723 | 0.7477 | 0.3495 | 0.7952 | 0.6789 | 0.7101 |
| 0.5706 | 4.27 | 1580 | 0.5364 | 0.6783 | 0.7810 | 0.8687 | 0.9245 | 0.8144 | 0.8753 | 0.3755 | 0.9310 | 0.8559 | 0.6905 | 0.8558 | 0.6808 | 0.7445 | 0.3183 | 0.7911 | 0.6939 | 0.6638 |
| 0.1208 | 4.32 | 1600 | 0.4386 | 0.7019 | 0.7988 | 0.8822 | 0.9263 | 0.8024 | 0.8679 | 0.3790 | 0.9054 | 0.8363 | 0.8744 | 0.8585 | 0.6713 | 0.7511 | 0.3355 | 0.8077 | 0.7271 | 0.7618 |
| 0.2512 | 4.38 | 1620 | 0.5227 | 0.6897 | 0.7935 | 0.8710 | 0.9184 | 0.8347 | 0.8438 | 0.4322 | 0.9258 | 0.8585 | 0.7408 | 0.8614 | 0.6786 | 0.7523 | 0.3617 | 0.7773 | 0.6904 | 0.7065 |
| 0.3166 | 4.43 | 1640 | 0.5045 | 0.6884 | 0.7824 | 0.8706 | 0.9340 | 0.8140 | 0.8594 | 0.4231 | 0.9338 | 0.8053 | 0.7069 | 0.8569 | 0.6783 | 0.7576 | 0.3741 | 0.7811 | 0.6863 | 0.6844 |
| 0.2665 | 4.49 | 1660 | 0.5188 | 0.6725 | 0.7821 | 0.8632 | 0.9365 | 0.8342 | 0.8434 | 0.4594 | 0.9219 | 0.8279 | 0.6515 | 0.8532 | 0.6581 | 0.7450 | 0.4004 | 0.7945 | 0.6334 | 0.6231 |
| 0.1356 | 4.54 | 1680 | 0.5240 | 0.6725 | 0.7858 | 0.8641 | 0.9459 | 0.8062 | 0.8686 | 0.4517 | 0.8929 | 0.8764 | 0.6592 | 0.8543 | 0.6672 | 0.7614 | 0.3837 | 0.8082 | 0.6011 | 0.6317 |
| 0.1726 | 4.59 | 1700 | 0.4637 | 0.6945 | 0.8060 | 0.8743 | 0.9451 | 0.7883 | 0.8735 | 0.4841 | 0.8618 | 0.8822 | 0.8067 | 0.8566 | 0.6661 | 0.7613 | 0.3600 | 0.7912 | 0.6819 | 0.7442 |
| 0.2107 | 4.65 | 1720 | 0.4839 | 0.6852 | 0.7843 | 0.8710 | 0.9202 | 0.8059 | 0.8504 | 0.4318 | 0.9322 | 0.7602 | 0.7896 | 0.8540 | 0.6601 | 0.7440 | 0.3630 | 0.7887 | 0.6634 | 0.7233 |
| 0.5774 | 4.7 | 1740 | 0.4662 | 0.7011 | 0.8022 | 0.8802 | 0.9174 | 0.7996 | 0.8822 | 0.4188 | 0.9135 | 0.8401 | 0.8439 | 0.8544 | 0.6604 | 0.7464 | 0.3716 | 0.8107 | 0.7122 | 0.7516 |
| 0.2358 | 4.76 | 1760 | 0.4472 | 0.7068 | 0.8107 | 0.8828 | 0.9311 | 0.8246 | 0.8927 | 0.4489 | 0.9046 | 0.8607 | 0.8122 | 0.8597 | 0.6797 | 0.7467 | 0.3761 | 0.8137 | 0.7021 | 0.7692 |
| 0.3879 | 4.81 | 1780 | 0.4750 | 0.6927 | 0.8127 | 0.8727 | 0.9369 | 0.8197 | 0.8730 | 0.5062 | 0.8635 | 0.9047 | 0.7850 | 0.8565 | 0.6747 | 0.7440 | 0.3610 | 0.7926 | 0.6716 | 0.7487 |
| 0.2336 | 4.86 | 1800 | 0.4364 | 0.7032 | 0.8038 | 0.8804 | 0.9344 | 0.8177 | 0.8543 | 0.4869 | 0.9251 | 0.8339 | 0.7744 | 0.8552 | 0.6703 | 0.7628 | 0.3852 | 0.8153 | 0.6992 | 0.7345 |
| 0.2303 | 4.92 | 1820 | 0.4305 | 0.7137 | 0.8110 | 0.8861 | 0.9226 | 0.8403 | 0.8824 | 0.4492 | 0.9362 | 0.8405 | 0.8055 | 0.8583 | 0.6809 | 0.7680 | 0.3905 | 0.8207 | 0.7112 | 0.7666 |
| 0.347 | 4.97 | 1840 | 0.4280 | 0.7141 | 0.8139 | 0.8851 | 0.9320 | 0.8301 | 0.8443 | 0.4832 | 0.9180 | 0.8781 | 0.8114 | 0.8593 | 0.6741 | 0.7539 | 0.4024 | 0.8134 | 0.7139 | 0.7815 |
| 0.1553 | 5.03 | 1860 | 0.4500 | 0.7075 | 0.8165 | 0.8814 | 0.9396 | 0.8204 | 0.8990 | 0.5335 | 0.9019 | 0.8580 | 0.7632 | 0.8591 | 0.6739 | 0.7626 | 0.3980 | 0.8150 | 0.7091 | 0.7347 |
| 0.2451 | 5.08 | 1880 | 0.4699 | 0.6888 | 0.7985 | 0.8688 | 0.9359 | 0.7957 | 0.8490 | 0.5428 | 0.9082 | 0.8630 | 0.6949 | 0.8587 | 0.6716 | 0.7755 | 0.3843 | 0.7832 | 0.6813 | 0.6671 |
| 0.3152 | 5.14 | 1900 | 0.4433 | 0.7119 | 0.8179 | 0.8849 | 0.9226 | 0.8324 | 0.8624 | 0.5160 | 0.9239 | 0.8426 | 0.8253 | 0.8586 | 0.6740 | 0.7557 | 0.3693 | 0.8188 | 0.7217 | 0.7851 |
| 0.1741 | 5.19 | 1920 | 0.4381 | 0.7134 | 0.8123 | 0.8857 | 0.9351 | 0.8145 | 0.8811 | 0.4748 | 0.9139 | 0.8559 | 0.8111 | 0.8665 | 0.6844 | 0.7734 | 0.3768 | 0.8098 | 0.7102 | 0.7725 |
| 0.3937 | 5.24 | 1940 | 0.4335 | 0.7107 | 0.8141 | 0.8829 | 0.9302 | 0.8452 | 0.8654 | 0.4312 | 0.8917 | 0.8908 | 0.8439 | 0.8573 | 0.6885 | 0.7681 | 0.3561 | 0.7981 | 0.7114 | 0.7955 |
| 0.1683 | 5.3 | 1960 | 0.4622 | 0.6996 | 0.8016 | 0.8783 | 0.9332 | 0.8085 | 0.8798 | 0.4643 | 0.9200 | 0.8613 | 0.7441 | 0.8629 | 0.6807 | 0.7724 | 0.3641 | 0.8036 | 0.7073 | 0.7065 |
| 0.2652 | 5.35 | 1980 | 0.4333 | 0.7100 | 0.8111 | 0.8843 | 0.9312 | 0.8416 | 0.8774 | 0.4394 | 0.9141 | 0.8674 | 0.8064 | 0.8667 | 0.6845 | 0.7734 | 0.3851 | 0.8110 | 0.6899 | 0.7596 |
| 0.3099 | 5.41 | 2000 | 0.5586 | 0.6680 | 0.7724 | 0.8618 | 0.9282 | 0.7736 | 0.8603 | 0.4053 | 0.9206 | 0.8546 | 0.6642 | 0.8590 | 0.6737 | 0.7655 | 0.3650 | 0.7950 | 0.6093 | 0.6084 |
| 0.6068 | 5.46 | 2020 | 0.5672 | 0.6591 | 0.7804 | 0.8593 | 0.9372 | 0.8178 | 0.9053 | 0.3834 | 0.8801 | 0.8836 | 0.6553 | 0.8622 | 0.6745 | 0.7522 | 0.3090 | 0.7906 | 0.6129 | 0.6120 |
| 0.1649 | 5.51 | 2040 | 0.5184 | 0.6631 | 0.7703 | 0.8656 | 0.9354 | 0.8171 | 0.8841 | 0.3137 | 0.9195 | 0.8507 | 0.6712 | 0.8666 | 0.6806 | 0.7513 | 0.2728 | 0.7955 | 0.6410 | 0.6339 |
| 0.3157 | 5.57 | 2060 | 0.5451 | 0.6690 | 0.7875 | 0.8620 | 0.9427 | 0.8082 | 0.8730 | 0.4344 | 0.8739 | 0.9014 | 0.6790 | 0.8628 | 0.6760 | 0.7532 | 0.3472 | 0.7876 | 0.6179 | 0.6383 |
| 0.3131 | 5.62 | 2080 | 0.5506 | 0.6716 | 0.7871 | 0.8620 | 0.9387 | 0.8165 | 0.8351 | 0.5187 | 0.9105 | 0.8349 | 0.6555 | 0.8622 | 0.6806 | 0.7524 | 0.3756 | 0.7901 | 0.6138 | 0.6266 |
| 0.182 | 5.68 | 2100 | 0.5583 | 0.6715 | 0.7925 | 0.8637 | 0.9354 | 0.8459 | 0.8491 | 0.4511 | 0.8906 | 0.8901 | 0.6851 | 0.8649 | 0.6686 | 0.7489 | 0.3597 | 0.7910 | 0.6146 | 0.6527 |
| 0.1015 | 5.73 | 2120 | 0.4312 | 0.7065 | 0.8055 | 0.8834 | 0.9359 | 0.8468 | 0.8577 | 0.4344 | 0.9218 | 0.8473 | 0.7947 | 0.8651 | 0.6778 | 0.7628 | 0.3601 | 0.8067 | 0.7145 | 0.7588 |
| 0.3909 | 5.78 | 2140 | 0.4617 | 0.6964 | 0.8043 | 0.8759 | 0.9347 | 0.8351 | 0.8596 | 0.4495 | 0.8979 | 0.8897 | 0.7638 | 0.8603 | 0.6770 | 0.7654 | 0.3572 | 0.7932 | 0.6947 | 0.7271 |
| 0.1689 | 5.84 | 2160 | 0.4988 | 0.6993 | 0.8062 | 0.8769 | 0.9449 | 0.8028 | 0.8856 | 0.4745 | 0.8807 | 0.8810 | 0.7739 | 0.8537 | 0.6741 | 0.7485 | 0.3827 | 0.8025 | 0.6884 | 0.7451 |
| 0.1827 | 5.89 | 2180 | 0.5481 | 0.6804 | 0.7881 | 0.8660 | 0.9235 | 0.8419 | 0.8409 | 0.4410 | 0.9261 | 0.8601 | 0.6829 | 0.8569 | 0.6627 | 0.7447 | 0.3791 | 0.7778 | 0.6845 | 0.6572 |
| 0.3295 | 5.95 | 2200 | 0.4630 | 0.7049 | 0.8220 | 0.8793 | 0.9256 | 0.8432 | 0.8667 | 0.5525 | 0.8974 | 0.8611 | 0.8078 | 0.8630 | 0.6639 | 0.7531 | 0.4023 | 0.8034 | 0.6889 | 0.7596 |
| 0.1909 | 6.0 | 2220 | 0.4903 | 0.6981 | 0.8002 | 0.8756 | 0.9314 | 0.8205 | 0.8716 | 0.4958 | 0.9282 | 0.8271 | 0.7268 | 0.8587 | 0.6747 | 0.7572 | 0.3987 | 0.8000 | 0.7056 | 0.6917 |
| 0.294 | 6.05 | 2240 | 0.5427 | 0.6866 | 0.7984 | 0.8676 | 0.9313 | 0.8283 | 0.8406 | 0.5110 | 0.9109 | 0.8848 | 0.6818 | 0.8590 | 0.6826 | 0.7498 | 0.3909 | 0.7836 | 0.6869 | 0.6536 |
| 0.2515 | 6.11 | 2260 | 0.5008 | 0.6957 | 0.8006 | 0.8741 | 0.9383 | 0.8152 | 0.8909 | 0.5185 | 0.9178 | 0.8102 | 0.7130 | 0.8617 | 0.6824 | 0.7637 | 0.3920 | 0.7969 | 0.6972 | 0.6757 |
| 0.2324 | 6.16 | 2280 | 0.4600 | 0.7024 | 0.8179 | 0.8769 | 0.9364 | 0.8166 | 0.8834 | 0.5561 | 0.8863 | 0.8821 | 0.7644 | 0.8654 | 0.6800 | 0.7644 | 0.3950 | 0.7963 | 0.6892 | 0.7262 |
| 0.3158 | 6.22 | 2300 | 0.4765 | 0.6958 | 0.8090 | 0.8738 | 0.9341 | 0.8288 | 0.8889 | 0.5295 | 0.8971 | 0.8324 | 0.7521 | 0.8590 | 0.6710 | 0.7602 | 0.3843 | 0.7922 | 0.6894 | 0.7145 |
| 0.2189 | 6.27 | 2320 | 0.4901 | 0.6929 | 0.8023 | 0.8706 | 0.9410 | 0.8206 | 0.8470 | 0.5496 | 0.9040 | 0.8414 | 0.7130 | 0.8602 | 0.6789 | 0.7572 | 0.3968 | 0.7841 | 0.6943 | 0.6785 |
| 0.1781 | 6.32 | 2340 | 0.4782 | 0.6890 | 0.7950 | 0.8716 | 0.9310 | 0.8302 | 0.9101 | 0.4230 | 0.9085 | 0.8508 | 0.7116 | 0.8592 | 0.6882 | 0.7522 | 0.3529 | 0.7885 | 0.7063 | 0.6755 |
| 0.2585 | 6.38 | 2360 | 0.4923 | 0.7005 | 0.8049 | 0.8793 | 0.9283 | 0.8514 | 0.8561 | 0.4506 | 0.9227 | 0.8487 | 0.7764 | 0.8570 | 0.6852 | 0.7455 | 0.3647 | 0.8112 | 0.6981 | 0.7414 |
| 0.2427 | 6.43 | 2380 | 0.4996 | 0.6936 | 0.8071 | 0.8733 | 0.9416 | 0.8346 | 0.8637 | 0.5122 | 0.8907 | 0.8682 | 0.7385 | 0.8580 | 0.6864 | 0.7578 | 0.3903 | 0.8046 | 0.6503 | 0.7078 |
| 0.3861 | 6.49 | 2400 | 0.5035 | 0.7018 | 0.8042 | 0.8762 | 0.9295 | 0.8189 | 0.8695 | 0.5184 | 0.9280 | 0.8392 | 0.7258 | 0.8635 | 0.6823 | 0.7661 | 0.4107 | 0.7948 | 0.7058 | 0.6893 |
| 0.2319 | 6.54 | 2420 | 0.5015 | 0.7052 | 0.8113 | 0.8770 | 0.9234 | 0.8489 | 0.8623 | 0.5606 | 0.9325 | 0.7943 | 0.7574 | 0.8624 | 0.6760 | 0.7643 | 0.4198 | 0.7921 | 0.7051 | 0.7169 |
| 0.1962 | 6.59 | 2440 | 0.4653 | 0.7124 | 0.8175 | 0.8841 | 0.9419 | 0.8166 | 0.8608 | 0.5498 | 0.9069 | 0.8497 | 0.7971 | 0.8606 | 0.6803 | 0.7680 | 0.3947 | 0.8181 | 0.7048 | 0.7604 |
| 0.2704 | 6.65 | 2460 | 0.4642 | 0.7087 | 0.8070 | 0.8838 | 0.9310 | 0.8264 | 0.8837 | 0.4730 | 0.9280 | 0.8015 | 0.8055 | 0.8591 | 0.6857 | 0.7674 | 0.3697 | 0.8158 | 0.6980 | 0.7654 |
| 0.1438 | 6.7 | 2480 | 0.5301 | 0.6828 | 0.7982 | 0.8666 | 0.9280 | 0.8532 | 0.8514 | 0.4709 | 0.9018 | 0.8805 | 0.7018 | 0.8565 | 0.6701 | 0.7622 | 0.3586 | 0.7784 | 0.6853 | 0.6683 |
| 0.3661 | 6.76 | 2500 | 0.5201 | 0.6848 | 0.7910 | 0.8679 | 0.9398 | 0.8114 | 0.8760 | 0.4659 | 0.9017 | 0.8442 | 0.6978 | 0.8554 | 0.6779 | 0.7657 | 0.3610 | 0.7814 | 0.6880 | 0.6640 |
| 0.2653 | 6.81 | 2520 | 0.5765 | 0.6665 | 0.7984 | 0.8554 | 0.9237 | 0.8575 | 0.8615 | 0.5306 | 0.8848 | 0.9053 | 0.6256 | 0.8563 | 0.6737 | 0.7581 | 0.3800 | 0.7724 | 0.6313 | 0.5938 |
| 0.1563 | 6.86 | 2540 | 0.5453 | 0.6770 | 0.7881 | 0.8628 | 0.9408 | 0.8165 | 0.8736 | 0.4846 | 0.8994 | 0.8492 | 0.6528 | 0.8572 | 0.6864 | 0.7601 | 0.3829 | 0.7809 | 0.6520 | 0.6195 |
| 0.2804 | 6.92 | 2560 | 0.5505 | 0.6766 | 0.7966 | 0.8610 | 0.9216 | 0.8491 | 0.8750 | 0.4852 | 0.8956 | 0.8783 | 0.6715 | 0.8516 | 0.6741 | 0.7649 | 0.3962 | 0.7802 | 0.6452 | 0.6238 |
| 0.2304 | 6.97 | 2580 | 0.4455 | 0.7107 | 0.8093 | 0.8830 | 0.9368 | 0.8223 | 0.8703 | 0.4649 | 0.9057 | 0.8460 | 0.8188 | 0.8645 | 0.6942 | 0.7683 | 0.3803 | 0.8019 | 0.6960 | 0.7697 |
| 0.1759 | 7.03 | 2600 | 0.5370 | 0.6859 | 0.7960 | 0.8701 | 0.9369 | 0.8329 | 0.8566 | 0.4381 | 0.8981 | 0.8998 | 0.7094 | 0.8653 | 0.6820 | 0.7661 | 0.3674 | 0.7875 | 0.6583 | 0.6749 |
| 0.2079 | 7.08 | 2620 | 0.5014 | 0.6916 | 0.7991 | 0.8711 | 0.9347 | 0.8112 | 0.8740 | 0.5012 | 0.9079 | 0.8548 | 0.7101 | 0.8660 | 0.6894 | 0.7706 | 0.3897 | 0.7867 | 0.6665 | 0.6724 |
| 0.2464 | 7.14 | 2640 | 0.5313 | 0.6833 | 0.7998 | 0.8669 | 0.9319 | 0.8369 | 0.8416 | 0.5209 | 0.9050 | 0.8687 | 0.6939 | 0.8678 | 0.6749 | 0.7667 | 0.3777 | 0.7797 | 0.6586 | 0.6575 |
| 0.0679 | 7.19 | 2660 | 0.5012 | 0.6878 | 0.7945 | 0.8698 | 0.9303 | 0.8397 | 0.8561 | 0.4667 | 0.9163 | 0.8310 | 0.7213 | 0.8590 | 0.6773 | 0.7661 | 0.3821 | 0.7905 | 0.6814 | 0.6579 |
| 0.2287 | 7.24 | 2680 | 0.5399 | 0.6824 | 0.8031 | 0.8637 | 0.9228 | 0.8091 | 0.8778 | 0.5374 | 0.8858 | 0.8791 | 0.7098 | 0.8607 | 0.6758 | 0.7685 | 0.3853 | 0.7732 | 0.6663 | 0.6471 |
| 0.2186 | 7.3 | 2700 | 0.5803 | 0.6891 | 0.7935 | 0.8700 | 0.9341 | 0.8192 | 0.8781 | 0.4739 | 0.9185 | 0.8475 | 0.6833 | 0.8650 | 0.6901 | 0.7697 | 0.4044 | 0.7889 | 0.6537 | 0.6519 |
| 0.2762 | 7.35 | 2720 | 0.5791 | 0.6799 | 0.7893 | 0.8648 | 0.9349 | 0.8160 | 0.8592 | 0.4940 | 0.9163 | 0.8543 | 0.6502 | 0.8658 | 0.6885 | 0.7666 | 0.3916 | 0.7818 | 0.6439 | 0.6209 |
| 0.185 | 7.41 | 2740 | 0.5155 | 0.6884 | 0.7950 | 0.8704 | 0.9468 | 0.8255 | 0.8475 | 0.4646 | 0.8956 | 0.8782 | 0.7065 | 0.8621 | 0.6823 | 0.7629 | 0.3701 | 0.7829 | 0.6797 | 0.6787 |
| 0.1529 | 7.46 | 2760 | 0.5215 | 0.6802 | 0.7893 | 0.8672 | 0.9248 | 0.8309 | 0.8764 | 0.4373 | 0.9220 | 0.8504 | 0.6837 | 0.8582 | 0.6872 | 0.7597 | 0.3669 | 0.7949 | 0.6466 | 0.6479 |
| 0.2866 | 7.51 | 2780 | 0.4837 | 0.7017 | 0.8069 | 0.8778 | 0.9290 | 0.8349 | 0.8724 | 0.4779 | 0.9149 | 0.8582 | 0.7612 | 0.8661 | 0.6901 | 0.7653 | 0.3819 | 0.7965 | 0.6911 | 0.7208 |
| 0.2183 | 7.57 | 2800 | 0.5563 | 0.6749 | 0.7957 | 0.8633 | 0.9363 | 0.8242 | 0.8763 | 0.5241 | 0.9018 | 0.8704 | 0.6367 | 0.8688 | 0.6848 | 0.7624 | 0.3946 | 0.7942 | 0.6070 | 0.6125 |
| 0.2455 | 7.62 | 2820 | 0.4623 | 0.7067 | 0.8074 | 0.8813 | 0.9398 | 0.8230 | 0.8494 | 0.4948 | 0.9147 | 0.8463 | 0.7835 | 0.8679 | 0.6843 | 0.7629 | 0.3926 | 0.8029 | 0.6898 | 0.7464 |
| 0.1664 | 7.68 | 2840 | 0.4660 | 0.7028 | 0.7985 | 0.8806 | 0.9395 | 0.8033 | 0.8812 | 0.4409 | 0.9158 | 0.8221 | 0.7864 | 0.8630 | 0.6901 | 0.7662 | 0.3531 | 0.8021 | 0.6947 | 0.7501 |
| 1.0029 | 7.73 | 2860 | 0.4397 | 0.7231 | 0.8275 | 0.8908 | 0.9374 | 0.8533 | 0.8568 | 0.5143 | 0.9030 | 0.8616 | 0.8659 | 0.8697 | 0.6823 | 0.7656 | 0.3693 | 0.8134 | 0.7337 | 0.8276 |
| 0.2047 | 7.78 | 2880 | 0.4525 | 0.7203 | 0.8150 | 0.8896 | 0.9485 | 0.8296 | 0.8770 | 0.4709 | 0.9067 | 0.8402 | 0.8319 | 0.8628 | 0.6917 | 0.7703 | 0.3786 | 0.8198 | 0.7211 | 0.7977 |
| 0.1632 | 7.84 | 2900 | 0.4483 | 0.7172 | 0.8198 | 0.8861 | 0.9323 | 0.8009 | 0.8895 | 0.5222 | 0.9038 | 0.8572 | 0.8327 | 0.8696 | 0.6841 | 0.7643 | 0.4035 | 0.8062 | 0.7007 | 0.7918 |
| 0.072 | 7.89 | 2920 | 0.4501 | 0.7155 | 0.8190 | 0.8847 | 0.9373 | 0.8090 | 0.8660 | 0.5574 | 0.9087 | 0.8422 | 0.8126 | 0.8675 | 0.6784 | 0.7714 | 0.4133 | 0.8067 | 0.6940 | 0.7769 |
| 0.9618 | 7.95 | 2940 | 0.5323 | 0.6991 | 0.7988 | 0.8759 | 0.9291 | 0.8388 | 0.8511 | 0.4605 | 0.9285 | 0.8381 | 0.7453 | 0.8670 | 0.6856 | 0.7605 | 0.3777 | 0.7853 | 0.7145 | 0.7035 |
| 0.1425 | 8.0 | 2960 | 0.4843 | 0.6921 | 0.8009 | 0.8733 | 0.9347 | 0.7719 | 0.8522 | 0.5143 | 0.8942 | 0.8577 | 0.7811 | 0.8645 | 0.6450 | 0.7588 | 0.3731 | 0.7844 | 0.7045 | 0.7144 |
| 0.1813 | 8.05 | 2980 | 0.4979 | 0.7072 | 0.8054 | 0.8825 | 0.9384 | 0.7601 | 0.8637 | 0.4972 | 0.9082 | 0.8651 | 0.8054 | 0.8662 | 0.6573 | 0.7656 | 0.3857 | 0.8013 | 0.7092 | 0.7650 |
| 0.2996 | 8.11 | 3000 | 0.4599 | 0.7154 | 0.8211 | 0.8854 | 0.9316 | 0.8283 | 0.8601 | 0.5269 | 0.9060 | 0.8602 | 0.8345 | 0.8690 | 0.6880 | 0.7628 | 0.3920 | 0.8073 | 0.7011 | 0.7879 |
| 0.2983 | 8.16 | 3020 | 0.4657 | 0.7168 | 0.8177 | 0.8876 | 0.9388 | 0.8314 | 0.8762 | 0.4802 | 0.9060 | 0.8649 | 0.8265 | 0.8734 | 0.6929 | 0.7619 | 0.3872 | 0.8092 | 0.6967 | 0.7967 |
| 0.512 | 8.22 | 3040 | 0.4672 | 0.7148 | 0.8119 | 0.8864 | 0.9414 | 0.7956 | 0.8628 | 0.5067 | 0.9141 | 0.8484 | 0.8144 | 0.8713 | 0.6848 | 0.7671 | 0.3808 | 0.8064 | 0.6993 | 0.7937 |
| 0.182 | 8.27 | 3060 | 0.4480 | 0.7153 | 0.8190 | 0.8851 | 0.9432 | 0.8298 | 0.8509 | 0.5410 | 0.9032 | 0.8446 | 0.8200 | 0.8692 | 0.6906 | 0.7711 | 0.3787 | 0.8038 | 0.7074 | 0.7867 |
| 0.1986 | 8.32 | 3080 | 0.5153 | 0.7015 | 0.8026 | 0.8773 | 0.9438 | 0.7877 | 0.8787 | 0.4991 | 0.9038 | 0.8646 | 0.7408 | 0.8691 | 0.6709 | 0.7734 | 0.3863 | 0.7858 | 0.7162 | 0.7085 |
| 0.1252 | 8.38 | 3100 | 0.5256 | 0.7018 | 0.8026 | 0.8768 | 0.9358 | 0.8417 | 0.8622 | 0.4587 | 0.9148 | 0.8675 | 0.7376 | 0.8674 | 0.6960 | 0.7640 | 0.3816 | 0.7857 | 0.7131 | 0.7050 |
| 0.1778 | 8.43 | 3120 | 0.5156 | 0.7006 | 0.8025 | 0.8762 | 0.9295 | 0.7772 | 0.8867 | 0.4904 | 0.9104 | 0.8760 | 0.7474 | 0.8714 | 0.6709 | 0.7701 | 0.3884 | 0.7796 | 0.7155 | 0.7081 |
| 0.4537 | 8.49 | 3140 | 0.4896 | 0.7111 | 0.8117 | 0.8850 | 0.9302 | 0.7755 | 0.8865 | 0.4924 | 0.9149 | 0.8773 | 0.8049 | 0.8734 | 0.6599 | 0.7760 | 0.3906 | 0.8035 | 0.7110 | 0.7636 |
| 0.2145 | 8.54 | 3160 | 0.4882 | 0.7216 | 0.8248 | 0.8874 | 0.9386 | 0.8280 | 0.8589 | 0.5382 | 0.9001 | 0.8789 | 0.8309 | 0.8747 | 0.6816 | 0.7719 | 0.4142 | 0.8002 | 0.7277 | 0.7810 |
| 0.122 | 8.59 | 3180 | 0.5472 | 0.7025 | 0.8031 | 0.8754 | 0.9395 | 0.8099 | 0.8762 | 0.5051 | 0.9096 | 0.8586 | 0.7224 | 0.8657 | 0.6846 | 0.7729 | 0.4120 | 0.7849 | 0.7169 | 0.6803 |
| 0.1508 | 8.65 | 3200 | 0.5601 | 0.6966 | 0.8057 | 0.8721 | 0.9367 | 0.8412 | 0.8686 | 0.5247 | 0.9049 | 0.8520 | 0.7121 | 0.8637 | 0.6709 | 0.7671 | 0.3982 | 0.7770 | 0.7219 | 0.6773 |
| 0.4993 | 8.7 | 3220 | 0.5737 | 0.6972 | 0.7957 | 0.8731 | 0.9402 | 0.8048 | 0.8753 | 0.4891 | 0.9151 | 0.8364 | 0.7090 | 0.8643 | 0.6825 | 0.7628 | 0.3978 | 0.7795 | 0.7202 | 0.6736 |
| 0.0767 | 8.76 | 3240 | 0.5454 | 0.7052 | 0.8102 | 0.8760 | 0.9274 | 0.8521 | 0.8674 | 0.5045 | 0.9141 | 0.8621 | 0.7441 | 0.8693 | 0.6900 | 0.7596 | 0.4048 | 0.7759 | 0.7264 | 0.7102 |
| 0.2786 | 8.81 | 3260 | 0.5146 | 0.7033 | 0.8035 | 0.8766 | 0.9393 | 0.8303 | 0.8532 | 0.4791 | 0.9087 | 0.8694 | 0.7443 | 0.8675 | 0.6943 | 0.7525 | 0.3987 | 0.7831 | 0.7178 | 0.7093 |
| 0.1633 | 8.86 | 3280 | 0.4591 | 0.7194 | 0.8222 | 0.8865 | 0.9428 | 0.8341 | 0.8836 | 0.5065 | 0.8904 | 0.8704 | 0.8274 | 0.8671 | 0.6946 | 0.7677 | 0.3988 | 0.8039 | 0.7112 | 0.7924 |
| 0.1259 | 8.92 | 3300 | 0.4320 | 0.7223 | 0.8275 | 0.8895 | 0.9368 | 0.8330 | 0.9041 | 0.5151 | 0.8898 | 0.8378 | 0.8757 | 0.8727 | 0.6900 | 0.7687 | 0.3766 | 0.8055 | 0.7210 | 0.8215 |
| 0.2201 | 8.97 | 3320 | 0.4566 | 0.7287 | 0.8239 | 0.8938 | 0.9394 | 0.8272 | 0.8730 | 0.4734 | 0.9048 | 0.8717 | 0.8778 | 0.8750 | 0.6905 | 0.7656 | 0.3990 | 0.8140 | 0.7268 | 0.8300 |
| 0.2125 | 9.03 | 3340 | 0.4679 | 0.7176 | 0.8216 | 0.8858 | 0.9358 | 0.8377 | 0.8983 | 0.4901 | 0.8962 | 0.8793 | 0.8140 | 0.8647 | 0.6957 | 0.7710 | 0.3897 | 0.8068 | 0.7199 | 0.7753 |
| 0.1314 | 9.08 | 3360 | 0.4641 | 0.7203 | 0.8144 | 0.8878 | 0.9404 | 0.8227 | 0.8681 | 0.4839 | 0.9150 | 0.8544 | 0.8164 | 0.8645 | 0.6968 | 0.7780 | 0.3938 | 0.8116 | 0.7206 | 0.7770 |
| 0.2678 | 9.14 | 3380 | 0.5029 | 0.7158 | 0.8139 | 0.8847 | 0.9369 | 0.8421 | 0.8687 | 0.4804 | 0.9142 | 0.8587 | 0.7962 | 0.8680 | 0.6944 | 0.7741 | 0.3978 | 0.8013 | 0.7197 | 0.7554 |
| 0.1653 | 9.19 | 3400 | 0.5625 | 0.7036 | 0.8104 | 0.8762 | 0.9346 | 0.8486 | 0.8761 | 0.5090 | 0.9074 | 0.8631 | 0.7343 | 0.8683 | 0.6925 | 0.7689 | 0.3983 | 0.7838 | 0.7123 | 0.7009 |
| 0.1075 | 9.24 | 3420 | 0.5050 | 0.7067 | 0.8133 | 0.8773 | 0.9398 | 0.8176 | 0.8944 | 0.5225 | 0.8919 | 0.8846 | 0.7425 | 0.8704 | 0.6955 | 0.7710 | 0.4031 | 0.7823 | 0.7195 | 0.7049 |
| 0.2607 | 9.3 | 3440 | 0.5197 | 0.7070 | 0.8086 | 0.8785 | 0.9407 | 0.8058 | 0.8881 | 0.5119 | 0.9015 | 0.8580 | 0.7543 | 0.8664 | 0.6911 | 0.7765 | 0.3871 | 0.7863 | 0.7193 | 0.7219 |
| 0.111 | 9.35 | 3460 | 0.5327 | 0.7099 | 0.8098 | 0.8817 | 0.9416 | 0.8412 | 0.8594 | 0.4685 | 0.9082 | 0.8837 | 0.7658 | 0.8725 | 0.6938 | 0.7697 | 0.3851 | 0.7902 | 0.7220 | 0.7362 |
| 0.1358 | 9.41 | 3480 | 0.4572 | 0.7143 | 0.8155 | 0.8857 | 0.9449 | 0.8100 | 0.8780 | 0.4886 | 0.8912 | 0.8628 | 0.8328 | 0.8675 | 0.6818 | 0.7675 | 0.3769 | 0.8055 | 0.7211 | 0.7794 |
| 0.1794 | 9.46 | 3500 | 0.4778 | 0.7072 | 0.8141 | 0.8799 | 0.9367 | 0.8324 | 0.8530 | 0.5274 | 0.9060 | 0.8690 | 0.7742 | 0.8703 | 0.6888 | 0.7636 | 0.3918 | 0.7955 | 0.7094 | 0.7308 |
| 0.1146 | 9.51 | 3520 | 0.5337 | 0.6935 | 0.7966 | 0.8740 | 0.9459 | 0.8011 | 0.8545 | 0.4980 | 0.9119 | 0.8569 | 0.7083 | 0.8687 | 0.6794 | 0.7643 | 0.3924 | 0.7935 | 0.6795 | 0.6768 |
| 0.4693 | 9.57 | 3540 | 0.5688 | 0.6864 | 0.7860 | 0.8732 | 0.9407 | 0.7510 | 0.8634 | 0.4659 | 0.9212 | 0.8391 | 0.7205 | 0.8703 | 0.6482 | 0.7612 | 0.3688 | 0.7911 | 0.6820 | 0.6830 |
| 0.2297 | 9.62 | 3560 | 0.5465 | 0.7049 | 0.8135 | 0.8772 | 0.9337 | 0.8205 | 0.8839 | 0.5210 | 0.8966 | 0.8817 | 0.7569 | 0.8679 | 0.6903 | 0.7676 | 0.3856 | 0.7855 | 0.7209 | 0.7163 |
| 0.3738 | 9.68 | 3580 | 0.5458 | 0.7063 | 0.8052 | 0.8797 | 0.9394 | 0.8195 | 0.8574 | 0.5006 | 0.9146 | 0.8267 | 0.7783 | 0.8682 | 0.6861 | 0.7628 | 0.3779 | 0.7898 | 0.7244 | 0.7348 |
| 0.2401 | 9.73 | 3600 | 0.5446 | 0.7028 | 0.8113 | 0.8772 | 0.9306 | 0.8440 | 0.8732 | 0.5133 | 0.9136 | 0.8654 | 0.7392 | 0.8709 | 0.6858 | 0.7633 | 0.4003 | 0.7900 | 0.7039 | 0.7054 |
| 0.1552 | 9.78 | 3620 | 0.5462 | 0.7034 | 0.8091 | 0.8772 | 0.9337 | 0.8208 | 0.8948 | 0.5019 | 0.9061 | 0.8639 | 0.7422 | 0.8720 | 0.6944 | 0.7614 | 0.4068 | 0.7898 | 0.6974 | 0.7019 |
| 0.1767 | 9.84 | 3640 | 0.6458 | 0.6895 | 0.7931 | 0.8700 | 0.9378 | 0.8240 | 0.8680 | 0.4922 | 0.9234 | 0.8316 | 0.6745 | 0.8725 | 0.6978 | 0.7662 | 0.4067 | 0.7840 | 0.6572 | 0.6423 |
| 0.2452 | 9.89 | 3660 | 0.5251 | 0.6978 | 0.8087 | 0.8739 | 0.9411 | 0.8178 | 0.8852 | 0.5412 | 0.8968 | 0.8593 | 0.7194 | 0.8721 | 0.6928 | 0.7677 | 0.4042 | 0.7878 | 0.6807 | 0.6794 |
| 0.218 | 9.95 | 3680 | 0.5541 | 0.6987 | 0.8074 | 0.8745 | 0.9378 | 0.8340 | 0.8495 | 0.5413 | 0.9072 | 0.8368 | 0.7451 | 0.8711 | 0.6906 | 0.7643 | 0.3786 | 0.7824 | 0.7028 | 0.7010 |
| 0.1928 | 10.0 | 3700 | 0.5603 | 0.7023 | 0.8083 | 0.8778 | 0.9418 | 0.8415 | 0.8670 | 0.4924 | 0.9043 | 0.8692 | 0.7416 | 0.8697 | 0.6857 | 0.7684 | 0.3845 | 0.7907 | 0.7082 | 0.7089 |
| 0.0984 | 10.05 | 3720 | 0.6013 | 0.6959 | 0.7949 | 0.8758 | 0.9430 | 0.7542 | 0.8797 | 0.4777 | 0.9055 | 0.8645 | 0.7396 | 0.8699 | 0.6738 | 0.7583 | 0.3702 | 0.7868 | 0.7053 | 0.7073 |
| 0.1346 | 10.11 | 3740 | 0.5829 | 0.7016 | 0.8074 | 0.8764 | 0.9393 | 0.7869 | 0.8904 | 0.5164 | 0.8944 | 0.8757 | 0.7488 | 0.8710 | 0.6825 | 0.7643 | 0.3911 | 0.7843 | 0.7127 | 0.7055 |
| 0.1479 | 10.16 | 3760 | 0.4795 | 0.7208 | 0.8207 | 0.8893 | 0.9386 | 0.7929 | 0.8455 | 0.5388 | 0.9065 | 0.8620 | 0.8604 | 0.8707 | 0.6719 | 0.7647 | 0.4008 | 0.8139 | 0.7273 | 0.7960 |
| 0.193 | 10.22 | 3780 | 0.4772 | 0.7096 | 0.8125 | 0.8833 | 0.9302 | 0.7916 | 0.8914 | 0.5130 | 0.9141 | 0.8452 | 0.8021 | 0.8722 | 0.6818 | 0.7675 | 0.4099 | 0.8108 | 0.6772 | 0.7481 |
| 0.1574 | 10.27 | 3800 | 0.4449 | 0.7268 | 0.8243 | 0.8923 | 0.9417 | 0.8150 | 0.8765 | 0.5202 | 0.9099 | 0.8649 | 0.8421 | 0.8742 | 0.6931 | 0.7703 | 0.4116 | 0.8200 | 0.7169 | 0.8017 |
| 0.1357 | 10.32 | 3820 | 0.4419 | 0.7214 | 0.8302 | 0.8891 | 0.9352 | 0.8044 | 0.8781 | 0.5679 | 0.8897 | 0.8569 | 0.8792 | 0.8756 | 0.6765 | 0.7712 | 0.3742 | 0.8061 | 0.7319 | 0.8142 |
| 0.1049 | 10.38 | 3840 | 0.4425 | 0.7260 | 0.8296 | 0.8915 | 0.9362 | 0.7928 | 0.8675 | 0.5635 | 0.8963 | 0.8647 | 0.8861 | 0.8760 | 0.6718 | 0.7747 | 0.3990 | 0.8098 | 0.7364 | 0.8142 |
| 0.1607 | 10.43 | 3860 | 0.4764 | 0.7204 | 0.8283 | 0.8885 | 0.9317 | 0.8047 | 0.8821 | 0.5544 | 0.8937 | 0.8585 | 0.8735 | 0.8748 | 0.6710 | 0.7674 | 0.3788 | 0.8028 | 0.7337 | 0.8141 |
| 0.7998 | 10.49 | 3880 | 0.4903 | 0.7190 | 0.8258 | 0.8866 | 0.9293 | 0.8216 | 0.8807 | 0.5352 | 0.8990 | 0.8687 | 0.8463 | 0.8689 | 0.6759 | 0.7684 | 0.4070 | 0.8060 | 0.7214 | 0.7852 |
| 0.1199 | 10.54 | 3900 | 0.4547 | 0.7258 | 0.8253 | 0.8905 | 0.9312 | 0.8032 | 0.8797 | 0.5204 | 0.9023 | 0.8627 | 0.8772 | 0.8707 | 0.6839 | 0.7741 | 0.4092 | 0.8106 | 0.7323 | 0.7998 |
| 0.1326 | 10.59 | 3920 | 0.4905 | 0.7125 | 0.8102 | 0.8818 | 0.9367 | 0.7898 | 0.8574 | 0.5258 | 0.9087 | 0.8482 | 0.8050 | 0.8691 | 0.6801 | 0.7665 | 0.4044 | 0.7903 | 0.7306 | 0.7463 |
| 0.1629 | 10.65 | 3940 | 0.5135 | 0.7051 | 0.8160 | 0.8778 | 0.9379 | 0.8330 | 0.8334 | 0.5501 | 0.8931 | 0.8747 | 0.7895 | 0.8722 | 0.6742 | 0.7511 | 0.3941 | 0.7827 | 0.7289 | 0.7323 |
| 0.0679 | 10.7 | 3960 | 0.5245 | 0.7076 | 0.8094 | 0.8792 | 0.9409 | 0.8088 | 0.8733 | 0.5049 | 0.8983 | 0.8658 | 0.7735 | 0.8711 | 0.6837 | 0.7620 | 0.3979 | 0.7853 | 0.7317 | 0.7214 |
| 0.1393 | 10.76 | 3980 | 0.5436 | 0.7136 | 0.8117 | 0.8829 | 0.9367 | 0.8293 | 0.8717 | 0.4982 | 0.9166 | 0.8495 | 0.7798 | 0.8723 | 0.6868 | 0.7678 | 0.4089 | 0.7928 | 0.7266 | 0.7397 |
| 0.1078 | 10.81 | 4000 | 0.5010 | 0.7127 | 0.8149 | 0.8819 | 0.9452 | 0.8207 | 0.8718 | 0.5382 | 0.9047 | 0.8634 | 0.7601 | 0.8705 | 0.6890 | 0.7707 | 0.4062 | 0.7944 | 0.7293 | 0.7291 |
| 0.1262 | 10.86 | 4020 | 0.5119 | 0.7137 | 0.8120 | 0.8831 | 0.9401 | 0.8343 | 0.8624 | 0.4949 | 0.9146 | 0.8643 | 0.7732 | 0.8707 | 0.6901 | 0.7673 | 0.4050 | 0.7960 | 0.7303 | 0.7366 |
| 0.2813 | 10.92 | 4040 | 0.4873 | 0.7121 | 0.8212 | 0.8824 | 0.9391 | 0.8373 | 0.8584 | 0.5629 | 0.9044 | 0.8678 | 0.7788 | 0.8733 | 0.6904 | 0.7699 | 0.3986 | 0.7993 | 0.7094 | 0.7439 |
| 0.6835 | 10.97 | 4060 | 0.5088 | 0.6994 | 0.8073 | 0.8751 | 0.9275 | 0.8088 | 0.8660 | 0.5367 | 0.9130 | 0.8390 | 0.7598 | 0.8694 | 0.6796 | 0.7653 | 0.3894 | 0.7855 | 0.6999 | 0.7069 |
| 0.24 | 11.03 | 4080 | 0.5099 | 0.7101 | 0.8181 | 0.8817 | 0.9348 | 0.8374 | 0.8767 | 0.5382 | 0.9085 | 0.8531 | 0.7782 | 0.8722 | 0.6903 | 0.7680 | 0.3917 | 0.7992 | 0.7114 | 0.7378 |
| 0.1172 | 11.08 | 4100 | 0.5336 | 0.7054 | 0.8088 | 0.8794 | 0.9448 | 0.8377 | 0.8661 | 0.5007 | 0.9066 | 0.8568 | 0.7493 | 0.8678 | 0.6831 | 0.7684 | 0.3967 | 0.7963 | 0.7119 | 0.7139 |
| 0.0705 | 11.14 | 4120 | 0.5258 | 0.7049 | 0.8014 | 0.8798 | 0.9424 | 0.8152 | 0.8762 | 0.4608 | 0.9168 | 0.8534 | 0.7447 | 0.8689 | 0.6900 | 0.7707 | 0.3883 | 0.7960 | 0.7123 | 0.7084 |
| 0.1683 | 11.19 | 4140 | 0.4890 | 0.7091 | 0.8063 | 0.8829 | 0.9440 | 0.7922 | 0.8761 | 0.4739 | 0.9035 | 0.8588 | 0.7957 | 0.8690 | 0.6781 | 0.7642 | 0.3927 | 0.8021 | 0.7156 | 0.7418 |
| 0.0792 | 11.24 | 4160 | 0.5043 | 0.7072 | 0.8172 | 0.8798 | 0.9436 | 0.8273 | 0.8587 | 0.5521 | 0.8918 | 0.8655 | 0.7812 | 0.8713 | 0.6821 | 0.7606 | 0.3962 | 0.7943 | 0.7169 | 0.7292 |
| 0.1486 | 11.3 | 4180 | 0.5367 | 0.7052 | 0.8043 | 0.8783 | 0.9389 | 0.7943 | 0.8553 | 0.5066 | 0.9108 | 0.8652 | 0.7592 | 0.8733 | 0.6778 | 0.7633 | 0.4170 | 0.7877 | 0.7134 | 0.7040 |
| 0.2621 | 11.35 | 4200 | 0.5333 | 0.7016 | 0.8055 | 0.8770 | 0.9428 | 0.8055 | 0.8440 | 0.5177 | 0.9030 | 0.8714 | 0.7542 | 0.8717 | 0.6815 | 0.7596 | 0.3952 | 0.7884 | 0.7123 | 0.7029 |
| 0.1573 | 11.41 | 4220 | 0.5311 | 0.7036 | 0.8070 | 0.8775 | 0.9407 | 0.8258 | 0.8659 | 0.4995 | 0.9045 | 0.8626 | 0.7499 | 0.8707 | 0.6917 | 0.7638 | 0.3966 | 0.7893 | 0.7121 | 0.7007 |
| 0.1706 | 11.46 | 4240 | 0.5298 | 0.7062 | 0.8133 | 0.8774 | 0.9385 | 0.8210 | 0.8889 | 0.5300 | 0.8954 | 0.8700 | 0.7495 | 0.8715 | 0.6947 | 0.7709 | 0.4077 | 0.7856 | 0.7115 | 0.7016 |
| 0.2682 | 11.51 | 4260 | 0.5481 | 0.7082 | 0.8057 | 0.8801 | 0.9444 | 0.8204 | 0.8799 | 0.4890 | 0.9099 | 0.8406 | 0.7554 | 0.8715 | 0.6969 | 0.7734 | 0.3951 | 0.7902 | 0.7176 | 0.7128 |
| 0.1008 | 11.57 | 4280 | 0.5343 | 0.7053 | 0.8131 | 0.8770 | 0.9272 | 0.7950 | 0.8888 | 0.5527 | 0.9074 | 0.8655 | 0.7551 | 0.8723 | 0.6802 | 0.7748 | 0.4058 | 0.7848 | 0.7189 | 0.7000 |
| 0.2386 | 11.62 | 4300 | 0.5017 | 0.7097 | 0.8228 | 0.8813 | 0.9308 | 0.7722 | 0.8942 | 0.5822 | 0.8842 | 0.8695 | 0.8265 | 0.8662 | 0.6711 | 0.7772 | 0.3790 | 0.8012 | 0.7206 | 0.7526 |
| 0.1102 | 11.68 | 4320 | 0.4944 | 0.7106 | 0.8179 | 0.8823 | 0.9286 | 0.8126 | 0.8744 | 0.5304 | 0.9009 | 0.8469 | 0.8317 | 0.8592 | 0.6791 | 0.7732 | 0.3770 | 0.8077 | 0.7172 | 0.7611 |
| 0.1461 | 11.73 | 4340 | 0.5308 | 0.7016 | 0.8080 | 0.8756 | 0.9270 | 0.8166 | 0.8748 | 0.5144 | 0.9086 | 0.8492 | 0.7654 | 0.8572 | 0.6779 | 0.7672 | 0.3964 | 0.7956 | 0.7171 | 0.6998 |
| 0.1593 | 11.78 | 4360 | 0.5414 | 0.7097 | 0.8131 | 0.8799 | 0.9415 | 0.8112 | 0.8727 | 0.5428 | 0.9044 | 0.8635 | 0.7555 | 0.8653 | 0.6940 | 0.7638 | 0.4048 | 0.7962 | 0.7219 | 0.7224 |
| 0.1039 | 11.84 | 4380 | 0.5111 | 0.7156 | 0.8199 | 0.8830 | 0.9364 | 0.8319 | 0.8651 | 0.5438 | 0.9069 | 0.8736 | 0.7819 | 0.8673 | 0.6925 | 0.7635 | 0.4138 | 0.7998 | 0.7295 | 0.7431 |
| 0.3453 | 11.89 | 4400 | 0.5305 | 0.7131 | 0.8093 | 0.8818 | 0.9432 | 0.8346 | 0.8722 | 0.4977 | 0.9165 | 0.8497 | 0.7513 | 0.8652 | 0.6879 | 0.7682 | 0.4254 | 0.7981 | 0.7278 | 0.7194 |
| 0.15 | 11.95 | 4420 | 0.4693 | 0.7259 | 0.8316 | 0.8903 | 0.9435 | 0.8364 | 0.8837 | 0.5534 | 0.8908 | 0.8672 | 0.8460 | 0.8715 | 0.6885 | 0.7678 | 0.4070 | 0.8116 | 0.7273 | 0.8075 |
| 0.1132 | 12.0 | 4440 | 0.4752 | 0.7248 | 0.8355 | 0.8893 | 0.9381 | 0.8157 | 0.8643 | 0.6057 | 0.8898 | 0.8787 | 0.8560 | 0.8722 | 0.6864 | 0.7675 | 0.3969 | 0.8100 | 0.7267 | 0.8141 |
| 0.2272 | 12.05 | 4460 | 0.4776 | 0.7244 | 0.8240 | 0.8911 | 0.9364 | 0.7727 | 0.8572 | 0.5785 | 0.9150 | 0.8516 | 0.8565 | 0.8748 | 0.6708 | 0.7655 | 0.4108 | 0.8158 | 0.7212 | 0.8117 |
| 0.1862 | 12.11 | 4480 | 0.4954 | 0.7204 | 0.8241 | 0.8878 | 0.9394 | 0.8316 | 0.8658 | 0.5470 | 0.9084 | 0.8582 | 0.8186 | 0.8730 | 0.6965 | 0.7638 | 0.4046 | 0.8122 | 0.7081 | 0.7845 |
| 0.1436 | 12.16 | 4500 | 0.5070 | 0.7154 | 0.8182 | 0.8840 | 0.9417 | 0.8228 | 0.8489 | 0.5516 | 0.9087 | 0.8635 | 0.7903 | 0.8700 | 0.6895 | 0.7569 | 0.4171 | 0.8040 | 0.7163 | 0.7537 |
| 0.1581 | 12.22 | 4520 | 0.4918 | 0.7174 | 0.8263 | 0.8842 | 0.9284 | 0.8167 | 0.8722 | 0.5734 | 0.9050 | 0.8823 | 0.8058 | 0.8716 | 0.6850 | 0.7606 | 0.4183 | 0.8006 | 0.7239 | 0.7619 |
| 0.2003 | 12.27 | 4540 | 0.5508 | 0.7075 | 0.8139 | 0.8776 | 0.9303 | 0.8128 | 0.8467 | 0.5666 | 0.9128 | 0.8689 | 0.7590 | 0.8691 | 0.6917 | 0.7535 | 0.4125 | 0.7878 | 0.7209 | 0.7167 |
| 0.1785 | 12.32 | 4560 | 0.4795 | 0.7141 | 0.8262 | 0.8825 | 0.9354 | 0.8445 | 0.8594 | 0.5835 | 0.9014 | 0.8716 | 0.7876 | 0.8715 | 0.6871 | 0.7638 | 0.4103 | 0.7987 | 0.7186 | 0.7488 |
| 0.0645 | 12.38 | 4580 | 0.4897 | 0.7221 | 0.8265 | 0.8875 | 0.9328 | 0.8254 | 0.8690 | 0.5660 | 0.9127 | 0.8652 | 0.8145 | 0.8731 | 0.6958 | 0.7662 | 0.4194 | 0.8097 | 0.7179 | 0.7727 |
| 0.0784 | 12.43 | 4600 | 0.5016 | 0.7248 | 0.8310 | 0.8896 | 0.9303 | 0.8241 | 0.8822 | 0.5576 | 0.9062 | 0.8807 | 0.8362 | 0.8728 | 0.6910 | 0.7655 | 0.4182 | 0.8151 | 0.7209 | 0.7901 |
| 0.1183 | 12.49 | 4620 | 0.5362 | 0.7072 | 0.8020 | 0.8802 | 0.9433 | 0.8042 | 0.8626 | 0.4774 | 0.9161 | 0.8566 | 0.7538 | 0.8702 | 0.6923 | 0.7646 | 0.3956 | 0.7930 | 0.7202 | 0.7145 |
| 0.2538 | 12.54 | 4640 | 0.5996 | 0.7075 | 0.8150 | 0.8784 | 0.9358 | 0.8371 | 0.8669 | 0.5344 | 0.9051 | 0.8757 | 0.7496 | 0.8722 | 0.6942 | 0.7627 | 0.4062 | 0.7872 | 0.7139 | 0.7161 |
| 0.1557 | 12.59 | 4660 | 0.5350 | 0.7070 | 0.8149 | 0.8787 | 0.9377 | 0.8244 | 0.8555 | 0.5539 | 0.9038 | 0.8654 | 0.7638 | 0.8742 | 0.6887 | 0.7628 | 0.4033 | 0.7875 | 0.7079 | 0.7249 |
| 0.3497 | 12.65 | 4680 | 0.4915 | 0.7138 | 0.8195 | 0.8835 | 0.9416 | 0.8156 | 0.8645 | 0.5468 | 0.8973 | 0.8716 | 0.7994 | 0.8758 | 0.6877 | 0.7662 | 0.4042 | 0.7962 | 0.7115 | 0.7552 |
| 0.1298 | 12.7 | 4700 | 0.4880 | 0.7154 | 0.8223 | 0.8841 | 0.9379 | 0.7987 | 0.8712 | 0.5789 | 0.9026 | 0.8717 | 0.7947 | 0.8782 | 0.6819 | 0.7713 | 0.4105 | 0.7964 | 0.7150 | 0.7545 |
| 0.248 | 12.76 | 4720 | 0.5498 | 0.7106 | 0.8102 | 0.8817 | 0.9356 | 0.8214 | 0.8667 | 0.5055 | 0.9203 | 0.8617 | 0.7605 | 0.8763 | 0.6887 | 0.7685 | 0.4113 | 0.7928 | 0.7108 | 0.7255 |
| 0.0969 | 12.81 | 4740 | 0.5653 | 0.7107 | 0.8169 | 0.8805 | 0.9369 | 0.8377 | 0.8712 | 0.5441 | 0.9098 | 0.8652 | 0.7536 | 0.8739 | 0.6900 | 0.7678 | 0.4176 | 0.7918 | 0.7133 | 0.7204 |
| 0.1095 | 12.86 | 4760 | 0.5436 | 0.7105 | 0.8125 | 0.8808 | 0.9459 | 0.8153 | 0.8896 | 0.5127 | 0.8978 | 0.8735 | 0.7524 | 0.8727 | 0.6978 | 0.7675 | 0.4101 | 0.7922 | 0.7139 | 0.7190 |
| 0.1964 | 12.92 | 4780 | 0.6304 | 0.7005 | 0.8044 | 0.8752 | 0.9413 | 0.8388 | 0.8647 | 0.5172 | 0.9168 | 0.8495 | 0.7027 | 0.8674 | 0.6873 | 0.7677 | 0.4157 | 0.7904 | 0.7023 | 0.6725 |
| 0.1341 | 12.97 | 4800 | 0.5768 | 0.7055 | 0.8045 | 0.8782 | 0.9402 | 0.8154 | 0.8773 | 0.4902 | 0.9136 | 0.8653 | 0.7298 | 0.8686 | 0.6967 | 0.7681 | 0.4123 | 0.7939 | 0.7056 | 0.6930 |
| 0.1123 | 13.03 | 4820 | 0.5759 | 0.7002 | 0.8034 | 0.8755 | 0.9434 | 0.8259 | 0.8765 | 0.4800 | 0.9013 | 0.8750 | 0.7221 | 0.8651 | 0.6934 | 0.7657 | 0.3973 | 0.7897 | 0.7014 | 0.6886 |
| 0.1658 | 13.08 | 4840 | 0.5776 | 0.7006 | 0.8076 | 0.8748 | 0.9364 | 0.8288 | 0.8806 | 0.5232 | 0.9092 | 0.8625 | 0.7121 | 0.8665 | 0.6883 | 0.7674 | 0.4129 | 0.7891 | 0.7019 | 0.6778 |
| 0.1839 | 13.14 | 4860 | 0.5867 | 0.6981 | 0.8042 | 0.8744 | 0.9386 | 0.8375 | 0.8730 | 0.4946 | 0.9098 | 0.8683 | 0.7075 | 0.8647 | 0.6890 | 0.7616 | 0.4011 | 0.7914 | 0.7040 | 0.6747 |
| 0.1099 | 13.19 | 4880 | 0.6082 | 0.6983 | 0.8013 | 0.8742 | 0.9445 | 0.8159 | 0.8635 | 0.5103 | 0.9103 | 0.8655 | 0.6990 | 0.8639 | 0.6907 | 0.7644 | 0.4066 | 0.7923 | 0.7000 | 0.6699 |
| 0.3081 | 13.24 | 4900 | 0.5948 | 0.6993 | 0.8048 | 0.8745 | 0.9346 | 0.8352 | 0.8682 | 0.5033 | 0.9140 | 0.8640 | 0.7146 | 0.8655 | 0.6920 | 0.7625 | 0.4031 | 0.7891 | 0.7025 | 0.6804 |
| 0.0714 | 13.3 | 4920 | 0.5051 | 0.7049 | 0.8191 | 0.8782 | 0.9342 | 0.8449 | 0.8754 | 0.5468 | 0.8989 | 0.8811 | 0.7522 | 0.8685 | 0.6846 | 0.7704 | 0.3944 | 0.7956 | 0.7036 | 0.7175 |
| 0.2305 | 13.35 | 4940 | 0.5408 | 0.7004 | 0.8093 | 0.8756 | 0.9396 | 0.8220 | 0.8793 | 0.5384 | 0.9057 | 0.8616 | 0.7183 | 0.8691 | 0.6900 | 0.7697 | 0.3954 | 0.7909 | 0.7016 | 0.6861 |
| 0.2512 | 13.41 | 4960 | 0.5822 | 0.6990 | 0.8107 | 0.8738 | 0.9390 | 0.8140 | 0.8907 | 0.5550 | 0.8980 | 0.8683 | 0.7096 | 0.8679 | 0.6899 | 0.7657 | 0.4019 | 0.7876 | 0.7041 | 0.6757 |
| 0.1393 | 13.46 | 4980 | 0.5820 | 0.6959 | 0.8083 | 0.8723 | 0.9446 | 0.8042 | 0.8774 | 0.5541 | 0.8882 | 0.8804 | 0.7092 | 0.8667 | 0.6888 | 0.7699 | 0.3841 | 0.7838 | 0.7022 | 0.6755 |
| 0.9497 | 13.51 | 5000 | 0.5431 | 0.6950 | 0.8085 | 0.8726 | 0.9424 | 0.8437 | 0.8893 | 0.5207 | 0.8900 | 0.8553 | 0.7182 | 0.8657 | 0.6915 | 0.7660 | 0.3743 | 0.7847 | 0.7006 | 0.6824 |
| 0.0806 | 13.57 | 5020 | 0.5681 | 0.6932 | 0.8071 | 0.8730 | 0.9460 | 0.8298 | 0.8792 | 0.5086 | 0.8840 | 0.8783 | 0.7235 | 0.8651 | 0.6882 | 0.7591 | 0.3640 | 0.7898 | 0.6952 | 0.6910 |
| 0.1022 | 13.62 | 5040 | 0.6025 | 0.7020 | 0.8031 | 0.8773 | 0.9384 | 0.8149 | 0.8617 | 0.5091 | 0.9205 | 0.8461 | 0.7313 | 0.8709 | 0.6938 | 0.7583 | 0.3988 | 0.7930 | 0.7026 | 0.6969 |
| 0.184 | 13.68 | 5060 | 0.5221 | 0.7129 | 0.8152 | 0.8834 | 0.9440 | 0.8107 | 0.8484 | 0.5483 | 0.9078 | 0.8577 | 0.7891 | 0.8723 | 0.6916 | 0.7579 | 0.4009 | 0.8017 | 0.7134 | 0.7525 |
| 0.1923 | 13.73 | 5080 | 0.5278 | 0.7074 | 0.8231 | 0.8794 | 0.9363 | 0.8139 | 0.8365 | 0.6253 | 0.8986 | 0.8638 | 0.7869 | 0.8725 | 0.6876 | 0.7527 | 0.3841 | 0.7960 | 0.7094 | 0.7491 |
| 0.1851 | 13.78 | 5100 | 0.5243 | 0.7208 | 0.8293 | 0.8880 | 0.9352 | 0.8263 | 0.8504 | 0.5776 | 0.9032 | 0.8726 | 0.8400 | 0.8708 | 0.6909 | 0.7572 | 0.4059 | 0.8167 | 0.7142 | 0.7900 |
| 0.0883 | 13.84 | 5120 | 0.5024 | 0.7236 | 0.8229 | 0.8902 | 0.9409 | 0.7976 | 0.8571 | 0.5421 | 0.9083 | 0.8774 | 0.8370 | 0.8729 | 0.6898 | 0.7625 | 0.4175 | 0.8193 | 0.7105 | 0.7929 |
| 0.195 | 13.89 | 5140 | 0.4746 | 0.7218 | 0.8266 | 0.8886 | 0.9399 | 0.8212 | 0.8697 | 0.5583 | 0.9008 | 0.8562 | 0.8400 | 0.8729 | 0.6952 | 0.7671 | 0.4038 | 0.8141 | 0.7114 | 0.7878 |
| 0.0844 | 13.95 | 5160 | 0.4909 | 0.7205 | 0.8265 | 0.8880 | 0.9351 | 0.8371 | 0.8583 | 0.5528 | 0.9070 | 0.8559 | 0.8395 | 0.8718 | 0.6946 | 0.7642 | 0.3979 | 0.8136 | 0.7127 | 0.7886 |
| 0.1474 | 14.0 | 5180 | 0.4922 | 0.7258 | 0.8272 | 0.8905 | 0.9388 | 0.8234 | 0.8750 | 0.5452 | 0.9064 | 0.8615 | 0.8401 | 0.8720 | 0.6999 | 0.7722 | 0.4129 | 0.8181 | 0.7126 | 0.7933 |
| 0.0503 | 14.05 | 5200 | 0.5318 | 0.7121 | 0.8123 | 0.8824 | 0.9386 | 0.8375 | 0.8687 | 0.4913 | 0.9146 | 0.8767 | 0.7582 | 0.8721 | 0.6998 | 0.7690 | 0.4109 | 0.7996 | 0.7093 | 0.7243 |
| 1.5633 | 14.11 | 5220 | 0.5514 | 0.7080 | 0.8110 | 0.8790 | 0.9361 | 0.8298 | 0.8660 | 0.5116 | 0.9110 | 0.8762 | 0.7465 | 0.8721 | 0.6993 | 0.7625 | 0.4142 | 0.7913 | 0.7101 | 0.7068 |
| 0.1624 | 14.16 | 5240 | 0.5814 | 0.7004 | 0.8080 | 0.8748 | 0.9385 | 0.8203 | 0.8895 | 0.5267 | 0.9042 | 0.8668 | 0.7101 | 0.8666 | 0.6915 | 0.7616 | 0.4128 | 0.7906 | 0.7032 | 0.6764 |
| 0.1234 | 14.22 | 5260 | 0.5338 | 0.7057 | 0.8118 | 0.8776 | 0.9384 | 0.8265 | 0.8866 | 0.5304 | 0.9078 | 0.8708 | 0.7222 | 0.8691 | 0.6993 | 0.7745 | 0.4113 | 0.7951 | 0.7052 | 0.6853 |
| 0.0554 | 14.27 | 5280 | 0.6325 | 0.7015 | 0.8086 | 0.8754 | 0.9377 | 0.8343 | 0.8675 | 0.5338 | 0.9147 | 0.8722 | 0.6998 | 0.8688 | 0.6954 | 0.7702 | 0.4123 | 0.7921 | 0.7013 | 0.6701 |
| 0.115 | 14.32 | 5300 | 0.4993 | 0.7210 | 0.8234 | 0.8861 | 0.9333 | 0.8439 | 0.8784 | 0.5337 | 0.9141 | 0.8679 | 0.7922 | 0.8755 | 0.6996 | 0.7750 | 0.4152 | 0.7987 | 0.7285 | 0.7545 |
| 0.1561 | 14.38 | 5320 | 0.5592 | 0.7135 | 0.8152 | 0.8818 | 0.9352 | 0.8277 | 0.8701 | 0.5345 | 0.9173 | 0.8638 | 0.7574 | 0.8746 | 0.7009 | 0.7720 | 0.4173 | 0.7938 | 0.7157 | 0.7203 |
| 0.0848 | 14.43 | 5340 | 0.5579 | 0.7164 | 0.8164 | 0.8833 | 0.9436 | 0.8026 | 0.8806 | 0.5433 | 0.9054 | 0.8765 | 0.7631 | 0.8745 | 0.6958 | 0.7746 | 0.4219 | 0.7952 | 0.7242 | 0.7287 |
| 0.1941 | 14.49 | 5360 | 0.5586 | 0.7189 | 0.8199 | 0.8843 | 0.9371 | 0.8163 | 0.8892 | 0.5516 | 0.9127 | 0.8632 | 0.7692 | 0.8737 | 0.6961 | 0.7774 | 0.4221 | 0.7964 | 0.7319 | 0.7347 |
| 0.1339 | 14.54 | 5380 | 0.5690 | 0.7114 | 0.8206 | 0.8800 | 0.9383 | 0.8260 | 0.8833 | 0.5868 | 0.9054 | 0.8599 | 0.7444 | 0.8714 | 0.6967 | 0.7769 | 0.4115 | 0.7933 | 0.7164 | 0.7138 |
| 0.1054 | 14.59 | 5400 | 0.5522 | 0.7119 | 0.8216 | 0.8797 | 0.9407 | 0.8214 | 0.8717 | 0.5864 | 0.8963 | 0.8839 | 0.7510 | 0.8725 | 0.6988 | 0.7761 | 0.4120 | 0.7897 | 0.7169 | 0.7175 |
| 0.1314 | 14.65 | 5420 | 0.5695 | 0.7134 | 0.8164 | 0.8815 | 0.9420 | 0.8324 | 0.8740 | 0.5469 | 0.9087 | 0.8569 | 0.7540 | 0.8724 | 0.6957 | 0.7754 | 0.4228 | 0.7945 | 0.7145 | 0.7183 |
| 0.2918 | 14.7 | 5440 | 0.5720 | 0.7115 | 0.8167 | 0.8806 | 0.9385 | 0.8318 | 0.8726 | 0.5263 | 0.9036 | 0.8930 | 0.7509 | 0.8738 | 0.6959 | 0.7796 | 0.4111 | 0.7905 | 0.7138 | 0.7155 |
| 0.1543 | 14.76 | 5460 | 0.5569 | 0.7137 | 0.8163 | 0.8817 | 0.9396 | 0.8374 | 0.8688 | 0.5230 | 0.9078 | 0.8829 | 0.7549 | 0.8741 | 0.6978 | 0.7713 | 0.4232 | 0.7927 | 0.7180 | 0.7187 |
| 0.1186 | 14.81 | 5480 | 0.5482 | 0.7212 | 0.8233 | 0.8853 | 0.9366 | 0.8389 | 0.8538 | 0.5453 | 0.9084 | 0.8849 | 0.7954 | 0.8771 | 0.6978 | 0.7658 | 0.4222 | 0.7920 | 0.7386 | 0.7551 |
| 0.1624 | 14.86 | 5500 | 0.5372 | 0.7189 | 0.8240 | 0.8842 | 0.9355 | 0.8337 | 0.8733 | 0.5587 | 0.9046 | 0.8694 | 0.7930 | 0.8756 | 0.6953 | 0.7717 | 0.4070 | 0.7904 | 0.7386 | 0.7533 |
| 0.9141 | 14.92 | 5520 | 0.5415 | 0.7166 | 0.8202 | 0.8832 | 0.9415 | 0.8160 | 0.8734 | 0.5625 | 0.9007 | 0.8602 | 0.7873 | 0.8744 | 0.6917 | 0.7683 | 0.4067 | 0.7896 | 0.7359 | 0.7495 |
| 0.1429 | 14.97 | 5540 | 0.5310 | 0.7171 | 0.8185 | 0.8837 | 0.9397 | 0.8161 | 0.8610 | 0.5489 | 0.9098 | 0.8795 | 0.7743 | 0.8751 | 0.6908 | 0.7675 | 0.4220 | 0.7939 | 0.7309 | 0.7396 |
| 0.21 | 15.03 | 5560 | 0.4918 | 0.7230 | 0.8230 | 0.8871 | 0.9395 | 0.8298 | 0.8797 | 0.5404 | 0.9100 | 0.8671 | 0.7944 | 0.8748 | 0.6930 | 0.7726 | 0.4246 | 0.7999 | 0.7373 | 0.7586 |
| 0.1884 | 15.08 | 5580 | 0.5099 | 0.7153 | 0.8160 | 0.8831 | 0.9437 | 0.8006 | 0.8602 | 0.5582 | 0.9087 | 0.8716 | 0.7686 | 0.8741 | 0.6895 | 0.7719 | 0.4206 | 0.7963 | 0.7214 | 0.7331 |
| 0.1516 | 15.14 | 5600 | 0.5335 | 0.7137 | 0.8168 | 0.8823 | 0.9372 | 0.8281 | 0.8823 | 0.5498 | 0.9161 | 0.8459 | 0.7585 | 0.8757 | 0.6937 | 0.7763 | 0.4186 | 0.7949 | 0.7140 | 0.7230 |
| 0.1574 | 15.19 | 5620 | 0.5654 | 0.7136 | 0.8156 | 0.8818 | 0.9410 | 0.8196 | 0.8757 | 0.5507 | 0.9120 | 0.8582 | 0.7521 | 0.8756 | 0.6989 | 0.7708 | 0.4237 | 0.7942 | 0.7134 | 0.7186 |
| 0.0803 | 15.24 | 5640 | 0.5721 | 0.7136 | 0.8178 | 0.8815 | 0.9429 | 0.8285 | 0.8653 | 0.5589 | 0.9080 | 0.8754 | 0.7458 | 0.8739 | 0.6983 | 0.7693 | 0.4272 | 0.7956 | 0.7165 | 0.7142 |
| 0.1424 | 15.3 | 5660 | 0.5583 | 0.7127 | 0.8150 | 0.8812 | 0.9406 | 0.8168 | 0.8676 | 0.5656 | 0.9163 | 0.8526 | 0.7456 | 0.8734 | 0.6944 | 0.7721 | 0.4269 | 0.7956 | 0.7123 | 0.7142 |
| 0.0757 | 15.35 | 5680 | 0.5655 | 0.7106 | 0.8140 | 0.8803 | 0.9388 | 0.8168 | 0.8697 | 0.5482 | 0.9116 | 0.8631 | 0.7501 | 0.8729 | 0.6917 | 0.7703 | 0.4162 | 0.7925 | 0.7139 | 0.7164 |
| 0.1092 | 15.41 | 5700 | 0.5550 | 0.7114 | 0.8160 | 0.8808 | 0.9429 | 0.8282 | 0.8617 | 0.5558 | 0.9075 | 0.8621 | 0.7535 | 0.8742 | 0.6944 | 0.7703 | 0.4152 | 0.7926 | 0.7138 | 0.7195 |
| 0.1833 | 15.46 | 5720 | 0.4969 | 0.7304 | 0.8307 | 0.8930 | 0.9412 | 0.8311 | 0.8812 | 0.5391 | 0.9056 | 0.8764 | 0.8401 | 0.8759 | 0.6980 | 0.7712 | 0.4282 | 0.8201 | 0.7182 | 0.8013 |
| 0.0929 | 15.51 | 5740 | 0.5019 | 0.7272 | 0.8228 | 0.8918 | 0.9443 | 0.8274 | 0.8651 | 0.5131 | 0.9135 | 0.8714 | 0.8244 | 0.8757 | 0.6980 | 0.7731 | 0.4210 | 0.8170 | 0.7180 | 0.7873 |
| 0.1537 | 15.57 | 5760 | 0.5682 | 0.7139 | 0.8192 | 0.8818 | 0.9327 | 0.8587 | 0.8672 | 0.5456 | 0.9196 | 0.8503 | 0.7602 | 0.8732 | 0.6913 | 0.7723 | 0.4284 | 0.7955 | 0.7165 | 0.7204 |
| 0.0488 | 15.62 | 5780 | 0.5516 | 0.7133 | 0.8158 | 0.8819 | 0.9407 | 0.8418 | 0.8711 | 0.5235 | 0.9096 | 0.8678 | 0.7562 | 0.8725 | 0.6953 | 0.7739 | 0.4228 | 0.7961 | 0.7140 | 0.7185 |
| 0.1074 | 15.68 | 5800 | 0.5703 | 0.7111 | 0.8162 | 0.8808 | 0.9394 | 0.8297 | 0.8875 | 0.5129 | 0.8991 | 0.8886 | 0.7560 | 0.8723 | 0.6930 | 0.7699 | 0.4142 | 0.7925 | 0.7174 | 0.7185 |
| 0.0859 | 15.73 | 5820 | 0.6073 | 0.7116 | 0.8095 | 0.8817 | 0.9371 | 0.8186 | 0.8766 | 0.5036 | 0.9199 | 0.8565 | 0.7546 | 0.8715 | 0.6931 | 0.7697 | 0.4136 | 0.7964 | 0.7204 | 0.7164 |
| 0.1068 | 15.78 | 5840 | 0.5386 | 0.7113 | 0.8205 | 0.8807 | 0.9382 | 0.8395 | 0.8582 | 0.5640 | 0.9029 | 0.8745 | 0.7664 | 0.8724 | 0.6948 | 0.7691 | 0.3998 | 0.7936 | 0.7235 | 0.7259 |
| 0.1288 | 15.84 | 5860 | 0.5640 | 0.7078 | 0.8112 | 0.8790 | 0.9459 | 0.8185 | 0.8664 | 0.5460 | 0.9050 | 0.8530 | 0.7438 | 0.8705 | 0.6968 | 0.7707 | 0.3983 | 0.7905 | 0.7155 | 0.7125 |
| 0.161 | 15.89 | 5880 | 0.6023 | 0.7055 | 0.8088 | 0.8782 | 0.9439 | 0.8222 | 0.8378 | 0.5346 | 0.9084 | 0.8665 | 0.7484 | 0.8716 | 0.6953 | 0.7624 | 0.3912 | 0.7886 | 0.7156 | 0.7141 |
| 0.2896 | 15.95 | 5900 | 0.6422 | 0.7098 | 0.8131 | 0.8799 | 0.9374 | 0.8238 | 0.8658 | 0.5354 | 0.9128 | 0.8696 | 0.7469 | 0.8726 | 0.6964 | 0.7694 | 0.4089 | 0.7913 | 0.7165 | 0.7133 |
| 0.1331 | 16.0 | 5920 | 0.6165 | 0.7120 | 0.8111 | 0.8815 | 0.9381 | 0.8166 | 0.8853 | 0.5089 | 0.9154 | 0.8665 | 0.7471 | 0.8742 | 0.6978 | 0.7667 | 0.4211 | 0.7942 | 0.7162 | 0.7140 |
| 0.0851 | 16.05 | 5940 | 0.5548 | 0.7146 | 0.8151 | 0.8835 | 0.9402 | 0.8268 | 0.8766 | 0.5103 | 0.9110 | 0.8780 | 0.7630 | 0.8772 | 0.6979 | 0.7719 | 0.4141 | 0.7961 | 0.7195 | 0.7256 |
| 0.1627 | 16.11 | 5960 | 0.6263 | 0.7120 | 0.8142 | 0.8817 | 0.9388 | 0.8262 | 0.8748 | 0.5333 | 0.9158 | 0.8616 | 0.7488 | 0.8774 | 0.6959 | 0.7727 | 0.4138 | 0.7927 | 0.7140 | 0.7176 |
| 0.1197 | 16.16 | 5980 | 0.5825 | 0.7085 | 0.8212 | 0.8792 | 0.9366 | 0.8377 | 0.8605 | 0.5824 | 0.9000 | 0.8640 | 0.7672 | 0.8767 | 0.6925 | 0.7708 | 0.3867 | 0.7857 | 0.7185 | 0.7288 |
| 0.5273 | 16.22 | 6000 | 0.5988 | 0.7157 | 0.8174 | 0.8830 | 0.9368 | 0.8146 | 0.8820 | 0.5475 | 0.9120 | 0.8567 | 0.7720 | 0.8767 | 0.6938 | 0.7706 | 0.4224 | 0.7928 | 0.7219 | 0.7321 |
| 0.1381 | 16.27 | 6020 | 0.5692 | 0.7150 | 0.8190 | 0.8827 | 0.9362 | 0.8343 | 0.8639 | 0.5474 | 0.9134 | 0.8709 | 0.7665 | 0.8773 | 0.6931 | 0.7721 | 0.4248 | 0.7940 | 0.7158 | 0.7276 |
| 0.0793 | 16.32 | 6040 | 0.5893 | 0.7136 | 0.8163 | 0.8818 | 0.9345 | 0.8222 | 0.8708 | 0.5475 | 0.9168 | 0.8616 | 0.7604 | 0.8758 | 0.6929 | 0.7744 | 0.4245 | 0.7928 | 0.7118 | 0.7232 |
| 0.0582 | 16.38 | 6060 | 0.7212 | 0.7032 | 0.8038 | 0.8757 | 0.9439 | 0.8061 | 0.8635 | 0.5485 | 0.9181 | 0.8428 | 0.7036 | 0.8632 | 0.6901 | 0.7688 | 0.4265 | 0.7936 | 0.7034 | 0.6770 |
| 0.1339 | 16.43 | 6080 | 0.5848 | 0.7123 | 0.8164 | 0.8810 | 0.9412 | 0.8312 | 0.8795 | 0.5399 | 0.9070 | 0.8696 | 0.7461 | 0.8715 | 0.6968 | 0.7737 | 0.4204 | 0.7956 | 0.7149 | 0.7131 |
| 0.1311 | 16.49 | 6100 | 0.6171 | 0.7109 | 0.8072 | 0.8811 | 0.9419 | 0.8082 | 0.8586 | 0.5156 | 0.9201 | 0.8596 | 0.7466 | 0.8714 | 0.6964 | 0.7690 | 0.4151 | 0.7955 | 0.7154 | 0.7138 |
| 0.0856 | 16.54 | 6120 | 0.6195 | 0.7095 | 0.8073 | 0.8804 | 0.9374 | 0.8099 | 0.8548 | 0.5201 | 0.9239 | 0.8550 | 0.7499 | 0.8722 | 0.6938 | 0.7662 | 0.4083 | 0.7932 | 0.7162 | 0.7163 |
| 0.8163 | 16.59 | 6140 | 0.5405 | 0.7141 | 0.8122 | 0.8836 | 0.9403 | 0.8288 | 0.8689 | 0.5015 | 0.9177 | 0.8631 | 0.7647 | 0.8734 | 0.6956 | 0.7713 | 0.4100 | 0.7994 | 0.7193 | 0.7294 |
| 0.0893 | 16.65 | 6160 | 0.5658 | 0.7105 | 0.8122 | 0.8809 | 0.9414 | 0.8283 | 0.8689 | 0.5119 | 0.9088 | 0.8746 | 0.7517 | 0.8725 | 0.6944 | 0.7686 | 0.4096 | 0.7936 | 0.7184 | 0.7168 |
| 0.0869 | 16.7 | 6180 | 0.5834 | 0.7103 | 0.8113 | 0.8811 | 0.9394 | 0.8424 | 0.8676 | 0.4995 | 0.9163 | 0.8702 | 0.7437 | 0.8731 | 0.6978 | 0.7695 | 0.4088 | 0.7950 | 0.7158 | 0.7120 |
| 0.161 | 16.76 | 6200 | 0.5759 | 0.7121 | 0.8129 | 0.8817 | 0.9404 | 0.8337 | 0.8744 | 0.5100 | 0.9125 | 0.8678 | 0.7516 | 0.8738 | 0.6992 | 0.7728 | 0.4140 | 0.7951 | 0.7147 | 0.7154 |
| 0.1898 | 16.81 | 6220 | 0.5838 | 0.7121 | 0.8098 | 0.8821 | 0.9413 | 0.8366 | 0.8608 | 0.5007 | 0.9211 | 0.8593 | 0.7492 | 0.8740 | 0.6970 | 0.7721 | 0.4164 | 0.7967 | 0.7128 | 0.7156 |
| 2.3534 | 16.86 | 6240 | 0.5930 | 0.7118 | 0.8086 | 0.8822 | 0.9427 | 0.8268 | 0.8857 | 0.4846 | 0.9158 | 0.8565 | 0.7483 | 0.8732 | 0.6989 | 0.7740 | 0.4120 | 0.7968 | 0.7125 | 0.7153 |
| 0.0658 | 16.92 | 6260 | 0.5076 | 0.7169 | 0.8225 | 0.8842 | 0.9412 | 0.8316 | 0.8670 | 0.5666 | 0.9066 | 0.8665 | 0.7779 | 0.8747 | 0.6946 | 0.7772 | 0.4169 | 0.8013 | 0.7137 | 0.7397 |
| 0.1386 | 16.97 | 6280 | 0.5100 | 0.7244 | 0.8309 | 0.8893 | 0.9373 | 0.8286 | 0.8770 | 0.5876 | 0.9056 | 0.8446 | 0.8356 | 0.8735 | 0.6964 | 0.7752 | 0.4061 | 0.8133 | 0.7103 | 0.7960 |
| 0.0797 | 17.03 | 6300 | 0.4916 | 0.7254 | 0.8309 | 0.8904 | 0.9378 | 0.8263 | 0.8680 | 0.5825 | 0.9079 | 0.8548 | 0.8389 | 0.8751 | 0.6940 | 0.7768 | 0.4061 | 0.8155 | 0.7121 | 0.7983 |
| 0.1281 | 17.08 | 6320 | 0.4981 | 0.7263 | 0.8313 | 0.8909 | 0.9397 | 0.8092 | 0.8823 | 0.5861 | 0.9015 | 0.8531 | 0.8472 | 0.8762 | 0.6908 | 0.7721 | 0.4094 | 0.8148 | 0.7163 | 0.8045 |
| 0.0712 | 17.14 | 6340 | 0.5308 | 0.7242 | 0.8318 | 0.8892 | 0.9362 | 0.8222 | 0.8747 | 0.5963 | 0.9033 | 0.8473 | 0.8428 | 0.8729 | 0.6944 | 0.7704 | 0.4005 | 0.8126 | 0.7158 | 0.8030 |
| 0.1443 | 17.19 | 6360 | 0.5330 | 0.7112 | 0.8118 | 0.8815 | 0.9431 | 0.8179 | 0.8822 | 0.5203 | 0.9094 | 0.8550 | 0.7545 | 0.8727 | 0.6979 | 0.7716 | 0.4045 | 0.7951 | 0.7158 | 0.7210 |
| 0.7862 | 17.24 | 6380 | 0.6242 | 0.7111 | 0.8107 | 0.8811 | 0.9461 | 0.8333 | 0.8621 | 0.5086 | 0.9098 | 0.8705 | 0.7442 | 0.8721 | 0.7009 | 0.7679 | 0.4137 | 0.7949 | 0.7152 | 0.7128 |
| 0.1323 | 17.3 | 6400 | 0.6169 | 0.7062 | 0.8104 | 0.8782 | 0.9392 | 0.8061 | 0.8697 | 0.5477 | 0.9125 | 0.8703 | 0.7276 | 0.8739 | 0.6952 | 0.7672 | 0.4245 | 0.7943 | 0.6948 | 0.6935 |
| 0.0704 | 17.35 | 6420 | 0.6165 | 0.7112 | 0.8108 | 0.8808 | 0.9378 | 0.8084 | 0.8616 | 0.5438 | 0.9205 | 0.8516 | 0.7521 | 0.8733 | 0.6905 | 0.7665 | 0.4271 | 0.7943 | 0.7113 | 0.7151 |
| 0.1044 | 17.41 | 6440 | 0.5954 | 0.7101 | 0.8119 | 0.8807 | 0.9396 | 0.8048 | 0.8641 | 0.5539 | 0.9155 | 0.8492 | 0.7560 | 0.8735 | 0.6838 | 0.7672 | 0.4176 | 0.7936 | 0.7167 | 0.7183 |
| 0.4188 | 17.46 | 6460 | 0.6219 | 0.7138 | 0.8113 | 0.8822 | 0.9381 | 0.8040 | 0.8722 | 0.5351 | 0.9202 | 0.8501 | 0.7597 | 0.8739 | 0.6925 | 0.7673 | 0.4190 | 0.7933 | 0.7264 | 0.7242 |
| 0.0606 | 17.51 | 6480 | 0.5436 | 0.7196 | 0.8189 | 0.8850 | 0.9411 | 0.8160 | 0.8758 | 0.5394 | 0.9080 | 0.8697 | 0.7826 | 0.8745 | 0.6983 | 0.7684 | 0.4206 | 0.7967 | 0.7355 | 0.7431 |
| 0.1067 | 17.57 | 6500 | 0.5200 | 0.7233 | 0.8264 | 0.8882 | 0.9433 | 0.8147 | 0.8653 | 0.5683 | 0.9000 | 0.8729 | 0.8203 | 0.8740 | 0.6962 | 0.7670 | 0.4042 | 0.8055 | 0.7336 | 0.7828 |
| 0.3848 | 17.62 | 6520 | 0.4962 | 0.7292 | 0.8336 | 0.8920 | 0.9366 | 0.8121 | 0.8657 | 0.5910 | 0.9033 | 0.8612 | 0.8653 | 0.8755 | 0.6931 | 0.7688 | 0.3959 | 0.8123 | 0.7385 | 0.8205 |
| 0.1252 | 17.68 | 6540 | 0.5172 | 0.7192 | 0.8187 | 0.8850 | 0.9383 | 0.8123 | 0.8679 | 0.5446 | 0.9126 | 0.8695 | 0.7854 | 0.8739 | 0.6940 | 0.7705 | 0.4168 | 0.7979 | 0.7371 | 0.7439 |
| 0.1498 | 17.73 | 6560 | 0.4809 | 0.7283 | 0.8310 | 0.8920 | 0.9402 | 0.8245 | 0.8852 | 0.5427 | 0.8983 | 0.8750 | 0.8509 | 0.8728 | 0.6941 | 0.7754 | 0.4045 | 0.8164 | 0.7263 | 0.8084 |
| 0.1339 | 17.78 | 6580 | 0.4834 | 0.7283 | 0.8310 | 0.8921 | 0.9358 | 0.8345 | 0.8709 | 0.5393 | 0.9060 | 0.8791 | 0.8517 | 0.8737 | 0.6922 | 0.7721 | 0.4075 | 0.8162 | 0.7265 | 0.8096 |
| 0.155 | 17.84 | 6600 | 0.5174 | 0.7280 | 0.8253 | 0.8921 | 0.9397 | 0.8201 | 0.8806 | 0.5166 | 0.9097 | 0.8698 | 0.8405 | 0.8729 | 0.6928 | 0.7705 | 0.4115 | 0.8164 | 0.7283 | 0.8032 |
| 0.3213 | 17.89 | 6620 | 0.5081 | 0.7286 | 0.8303 | 0.8922 | 0.9377 | 0.8294 | 0.8755 | 0.5476 | 0.9070 | 0.8652 | 0.8499 | 0.8738 | 0.6948 | 0.7714 | 0.4071 | 0.8164 | 0.7258 | 0.8106 |
| 0.178 | 17.95 | 6640 | 0.5023 | 0.7329 | 0.8289 | 0.8945 | 0.9379 | 0.8278 | 0.8713 | 0.5284 | 0.9133 | 0.8571 | 0.8664 | 0.8737 | 0.6960 | 0.7717 | 0.4138 | 0.8183 | 0.7347 | 0.8219 |
| 0.1809 | 18.0 | 6660 | 0.5378 | 0.7303 | 0.8290 | 0.8932 | 0.9422 | 0.8208 | 0.8670 | 0.5306 | 0.9038 | 0.8885 | 0.8499 | 0.8736 | 0.6950 | 0.7706 | 0.4134 | 0.8182 | 0.7297 | 0.8116 |
| 0.1183 | 18.05 | 6680 | 0.5358 | 0.7305 | 0.8232 | 0.8940 | 0.9387 | 0.8151 | 0.8823 | 0.4934 | 0.9152 | 0.8635 | 0.8547 | 0.8726 | 0.6962 | 0.7696 | 0.4086 | 0.8196 | 0.7322 | 0.8144 |
| 0.0902 | 18.11 | 6700 | 0.5166 | 0.7308 | 0.8300 | 0.8935 | 0.9395 | 0.8267 | 0.8612 | 0.5466 | 0.9113 | 0.8730 | 0.8521 | 0.8743 | 0.6937 | 0.7691 | 0.4176 | 0.8206 | 0.7297 | 0.8106 |
| 0.2487 | 18.16 | 6720 | 0.5290 | 0.7296 | 0.8283 | 0.8928 | 0.9401 | 0.8130 | 0.8690 | 0.5405 | 0.9087 | 0.8849 | 0.8418 | 0.8745 | 0.6961 | 0.7675 | 0.4223 | 0.8203 | 0.7235 | 0.8032 |
| 0.1186 | 18.22 | 6740 | 0.5202 | 0.7302 | 0.8264 | 0.8933 | 0.9429 | 0.7918 | 0.8686 | 0.5522 | 0.9070 | 0.8609 | 0.8612 | 0.8746 | 0.6945 | 0.7690 | 0.4091 | 0.8174 | 0.7308 | 0.8161 |
| 0.3732 | 18.27 | 6760 | 0.5339 | 0.7253 | 0.8236 | 0.8912 | 0.9418 | 0.8295 | 0.8555 | 0.5244 | 0.9130 | 0.8635 | 0.8371 | 0.8738 | 0.7020 | 0.7671 | 0.4021 | 0.8184 | 0.7166 | 0.7974 |
| 0.155 | 18.32 | 6780 | 0.5497 | 0.7263 | 0.8239 | 0.8913 | 0.9405 | 0.8221 | 0.8754 | 0.5133 | 0.9112 | 0.8795 | 0.8255 | 0.8741 | 0.7025 | 0.7702 | 0.4144 | 0.8192 | 0.7174 | 0.7866 |
| 0.2159 | 18.38 | 6800 | 0.5251 | 0.7267 | 0.8279 | 0.8913 | 0.9412 | 0.8193 | 0.8767 | 0.5469 | 0.9052 | 0.8675 | 0.8383 | 0.8737 | 0.7002 | 0.7723 | 0.4077 | 0.8191 | 0.7180 | 0.7958 |
| 0.1727 | 18.43 | 6820 | 0.5323 | 0.7267 | 0.8290 | 0.8914 | 0.9391 | 0.8435 | 0.8636 | 0.5414 | 0.9093 | 0.8668 | 0.8397 | 0.8741 | 0.7019 | 0.7696 | 0.4110 | 0.8198 | 0.7153 | 0.7952 |
| 0.1184 | 18.49 | 6840 | 0.5390 | 0.7279 | 0.8275 | 0.8919 | 0.9443 | 0.8292 | 0.8675 | 0.5370 | 0.9058 | 0.8745 | 0.8344 | 0.8733 | 0.7014 | 0.7688 | 0.4164 | 0.8206 | 0.7187 | 0.7961 |
| 0.1718 | 18.54 | 6860 | 0.5446 | 0.7227 | 0.8228 | 0.8885 | 0.9388 | 0.8258 | 0.8711 | 0.5280 | 0.9106 | 0.8750 | 0.8106 | 0.8737 | 0.7009 | 0.7685 | 0.4147 | 0.8117 | 0.7202 | 0.7690 |
| 0.1154 | 18.59 | 6880 | 0.5651 | 0.7153 | 0.8143 | 0.8835 | 0.9458 | 0.8170 | 0.8670 | 0.5288 | 0.9084 | 0.8668 | 0.7665 | 0.8726 | 0.6994 | 0.7697 | 0.4155 | 0.7994 | 0.7197 | 0.7306 |
| 0.1404 | 18.65 | 6900 | 0.5538 | 0.7159 | 0.8190 | 0.8833 | 0.9356 | 0.8293 | 0.8771 | 0.5495 | 0.9156 | 0.8586 | 0.7676 | 0.8738 | 0.6984 | 0.7709 | 0.4208 | 0.7993 | 0.7173 | 0.7309 |
| 0.071 | 18.7 | 6920 | 0.5250 | 0.7262 | 0.8285 | 0.8901 | 0.9375 | 0.8250 | 0.8717 | 0.5567 | 0.9094 | 0.8740 | 0.8249 | 0.8751 | 0.7000 | 0.7706 | 0.4210 | 0.8148 | 0.7190 | 0.7828 |
| 0.0938 | 18.76 | 6940 | 0.5374 | 0.7237 | 0.8247 | 0.8881 | 0.9400 | 0.8258 | 0.8701 | 0.5568 | 0.9119 | 0.8683 | 0.7999 | 0.8746 | 0.7015 | 0.7696 | 0.4281 | 0.8101 | 0.7185 | 0.7633 |
| 0.1624 | 18.81 | 6960 | 0.5468 | 0.7202 | 0.8220 | 0.8855 | 0.9412 | 0.8236 | 0.8749 | 0.5559 | 0.9097 | 0.8739 | 0.7748 | 0.8744 | 0.7019 | 0.7724 | 0.4289 | 0.8040 | 0.7197 | 0.7398 |
| 0.0766 | 18.86 | 6980 | 0.5889 | 0.7146 | 0.8156 | 0.8817 | 0.9370 | 0.8305 | 0.8660 | 0.5483 | 0.9193 | 0.8562 | 0.7518 | 0.8734 | 0.7006 | 0.7685 | 0.4292 | 0.7948 | 0.7190 | 0.7164 |
| 0.1392 | 18.92 | 7000 | 0.5337 | 0.7193 | 0.8191 | 0.8855 | 0.9409 | 0.8197 | 0.8699 | 0.5424 | 0.9119 | 0.8666 | 0.7826 | 0.8728 | 0.6962 | 0.7700 | 0.4286 | 0.8044 | 0.7190 | 0.7442 |
| 0.1355 | 18.97 | 7020 | 0.5454 | 0.7258 | 0.8283 | 0.8901 | 0.9425 | 0.8168 | 0.8598 | 0.5748 | 0.9060 | 0.8705 | 0.8279 | 0.8743 | 0.6964 | 0.7664 | 0.4196 | 0.8154 | 0.7179 | 0.7909 |
| 0.1311 | 19.03 | 7040 | 0.5109 | 0.7277 | 0.8286 | 0.8915 | 0.9404 | 0.8328 | 0.8597 | 0.5547 | 0.9111 | 0.8634 | 0.8379 | 0.8740 | 0.6958 | 0.7679 | 0.4189 | 0.8177 | 0.7200 | 0.7995 |
| 0.1482 | 19.08 | 7060 | 0.5200 | 0.7294 | 0.8293 | 0.8921 | 0.9414 | 0.8118 | 0.8670 | 0.5599 | 0.9054 | 0.8759 | 0.8438 | 0.8746 | 0.6977 | 0.7689 | 0.4223 | 0.8174 | 0.7217 | 0.8031 |
| 0.1097 | 19.14 | 7080 | 0.5579 | 0.7150 | 0.8173 | 0.8821 | 0.9386 | 0.8159 | 0.8658 | 0.5675 | 0.9150 | 0.8629 | 0.7555 | 0.8742 | 0.6979 | 0.7698 | 0.4300 | 0.7960 | 0.7167 | 0.7205 |
| 0.1646 | 19.19 | 7100 | 0.5838 | 0.7139 | 0.8143 | 0.8815 | 0.9400 | 0.8129 | 0.8633 | 0.5555 | 0.9159 | 0.8606 | 0.7518 | 0.8733 | 0.6975 | 0.7666 | 0.4313 | 0.7950 | 0.7159 | 0.7175 |
| 0.0971 | 19.24 | 7120 | 0.5568 | 0.7135 | 0.8168 | 0.8816 | 0.9445 | 0.8272 | 0.8667 | 0.5603 | 0.9081 | 0.8585 | 0.7521 | 0.8726 | 0.6968 | 0.7699 | 0.4247 | 0.7961 | 0.7161 | 0.7186 |
| 0.1318 | 19.3 | 7140 | 0.6116 | 0.7129 | 0.8136 | 0.8814 | 0.9422 | 0.8247 | 0.8791 | 0.5414 | 0.9141 | 0.8462 | 0.7472 | 0.8726 | 0.6966 | 0.7730 | 0.4221 | 0.7947 | 0.7176 | 0.7139 |
| 0.0763 | 19.35 | 7160 | 0.5854 | 0.7149 | 0.8149 | 0.8826 | 0.9390 | 0.8274 | 0.8725 | 0.5344 | 0.9174 | 0.8576 | 0.7558 | 0.8733 | 0.6960 | 0.7707 | 0.4267 | 0.7965 | 0.7198 | 0.7214 |
| 0.0788 | 19.41 | 7180 | 0.5573 | 0.7174 | 0.8149 | 0.8842 | 0.9420 | 0.8117 | 0.8622 | 0.5320 | 0.9159 | 0.8796 | 0.7609 | 0.8743 | 0.6977 | 0.7706 | 0.4311 | 0.8002 | 0.7221 | 0.7262 |
| 0.1318 | 19.46 | 7200 | 0.6123 | 0.7159 | 0.8151 | 0.8830 | 0.9387 | 0.8202 | 0.8640 | 0.5288 | 0.9166 | 0.8842 | 0.7533 | 0.8745 | 0.6989 | 0.7705 | 0.4289 | 0.7975 | 0.7230 | 0.7179 |
| 0.1114 | 19.51 | 7220 | 0.5467 | 0.7182 | 0.8213 | 0.8847 | 0.9439 | 0.8315 | 0.8570 | 0.5607 | 0.9081 | 0.8738 | 0.7739 | 0.8732 | 0.6985 | 0.7708 | 0.4228 | 0.8032 | 0.7214 | 0.7373 |
| 0.07 | 19.57 | 7240 | 0.5694 | 0.7138 | 0.8176 | 0.8817 | 0.9386 | 0.8233 | 0.8625 | 0.5745 | 0.9159 | 0.8512 | 0.7575 | 0.8729 | 0.6989 | 0.7690 | 0.4191 | 0.7970 | 0.7168 | 0.7230 |
| 0.1917 | 19.62 | 7260 | 0.5537 | 0.7195 | 0.8173 | 0.8859 | 0.9437 | 0.8205 | 0.8595 | 0.5427 | 0.9160 | 0.8555 | 0.7832 | 0.8731 | 0.7001 | 0.7673 | 0.4260 | 0.8061 | 0.7163 | 0.7480 |
| 0.0786 | 19.68 | 7280 | 0.5282 | 0.7266 | 0.8297 | 0.8899 | 0.9354 | 0.8340 | 0.8717 | 0.5637 | 0.9123 | 0.8692 | 0.8213 | 0.8734 | 0.7018 | 0.7706 | 0.4258 | 0.8159 | 0.7188 | 0.7801 |
| 0.1788 | 19.73 | 7300 | 0.5386 | 0.7247 | 0.8293 | 0.8889 | 0.9368 | 0.8228 | 0.8783 | 0.5741 | 0.9068 | 0.8675 | 0.8190 | 0.8735 | 0.7034 | 0.7712 | 0.4134 | 0.8130 | 0.7181 | 0.7804 |
| 0.1096 | 19.78 | 7320 | 0.5480 | 0.7266 | 0.8267 | 0.8909 | 0.9418 | 0.8234 | 0.8574 | 0.5573 | 0.9114 | 0.8640 | 0.8312 | 0.8734 | 0.7014 | 0.7679 | 0.4138 | 0.8179 | 0.7169 | 0.7953 |
| 1.5805 | 19.84 | 7340 | 0.5748 | 0.7261 | 0.8253 | 0.8906 | 0.9431 | 0.8027 | 0.8709 | 0.5585 | 0.9086 | 0.8687 | 0.8245 | 0.8736 | 0.6984 | 0.7696 | 0.4153 | 0.8171 | 0.7187 | 0.7903 |
| 1.7115 | 19.89 | 7360 | 0.5969 | 0.7266 | 0.8303 | 0.8905 | 0.9425 | 0.8219 | 0.8725 | 0.5765 | 0.9044 | 0.8711 | 0.8233 | 0.8739 | 0.7003 | 0.7707 | 0.4141 | 0.8172 | 0.7201 | 0.7899 |
| 0.0866 | 19.95 | 7380 | 0.5321 | 0.7292 | 0.8317 | 0.8920 | 0.9398 | 0.8248 | 0.8647 | 0.5692 | 0.9078 | 0.8798 | 0.8358 | 0.8751 | 0.7000 | 0.7697 | 0.4220 | 0.8199 | 0.7211 | 0.7965 |
| 0.2194 | 20.0 | 7400 | 0.5505 | 0.7289 | 0.8303 | 0.8920 | 0.9425 | 0.8285 | 0.8649 | 0.5543 | 0.9055 | 0.8832 | 0.8333 | 0.8743 | 0.7002 | 0.7694 | 0.4222 | 0.8199 | 0.7208 | 0.7956 |
| 0.1087 | 20.05 | 7420 | 0.5341 | 0.7300 | 0.8288 | 0.8929 | 0.9402 | 0.8256 | 0.8784 | 0.5351 | 0.9097 | 0.8758 | 0.8370 | 0.8742 | 0.7021 | 0.7717 | 0.4235 | 0.8221 | 0.7200 | 0.7968 |
| 0.126 | 20.11 | 7440 | 0.5485 | 0.7276 | 0.8256 | 0.8913 | 0.9412 | 0.8205 | 0.8598 | 0.5495 | 0.9151 | 0.8677 | 0.8256 | 0.8742 | 0.7018 | 0.7672 | 0.4241 | 0.8185 | 0.7189 | 0.7885 |
| 0.1614 | 20.16 | 7460 | 0.5406 | 0.7284 | 0.8246 | 0.8922 | 0.9410 | 0.8192 | 0.8571 | 0.5357 | 0.9163 | 0.8660 | 0.8369 | 0.8741 | 0.7034 | 0.7652 | 0.4218 | 0.8208 | 0.7172 | 0.7964 |
| 0.0881 | 20.22 | 7480 | 0.5345 | 0.7282 | 0.8212 | 0.8927 | 0.9437 | 0.8140 | 0.8735 | 0.5126 | 0.9175 | 0.8540 | 0.8334 | 0.8737 | 0.7033 | 0.7695 | 0.4166 | 0.8223 | 0.7164 | 0.7957 |
| 0.0598 | 20.27 | 7500 | 0.5295 | 0.7292 | 0.8264 | 0.8926 | 0.9387 | 0.8281 | 0.8747 | 0.5295 | 0.9167 | 0.8614 | 0.8358 | 0.8739 | 0.7031 | 0.7726 | 0.4197 | 0.8219 | 0.7171 | 0.7961 |
| 0.2735 | 20.32 | 7520 | 0.5292 | 0.7294 | 0.8250 | 0.8929 | 0.9416 | 0.8190 | 0.8668 | 0.5309 | 0.9165 | 0.8646 | 0.8356 | 0.8740 | 0.7014 | 0.7724 | 0.4220 | 0.8230 | 0.7171 | 0.7958 |
| 0.1385 | 20.38 | 7540 | 0.5413 | 0.7287 | 0.8247 | 0.8924 | 0.9373 | 0.8115 | 0.8671 | 0.5391 | 0.9209 | 0.8626 | 0.8344 | 0.8739 | 0.6977 | 0.7710 | 0.4253 | 0.8216 | 0.7153 | 0.7961 |
| 0.0755 | 20.43 | 7560 | 0.5195 | 0.7286 | 0.8275 | 0.8927 | 0.9411 | 0.8251 | 0.8726 | 0.5339 | 0.9118 | 0.8729 | 0.8349 | 0.8744 | 0.6969 | 0.7717 | 0.4212 | 0.8233 | 0.7166 | 0.7965 |
| 0.0906 | 20.49 | 7580 | 0.5124 | 0.7289 | 0.8287 | 0.8927 | 0.9415 | 0.8096 | 0.8768 | 0.5495 | 0.9084 | 0.8819 | 0.8332 | 0.8749 | 0.6968 | 0.7740 | 0.4196 | 0.8234 | 0.7184 | 0.7954 |
| 0.042 | 20.54 | 7600 | 0.5236 | 0.7280 | 0.8265 | 0.8917 | 0.9388 | 0.8250 | 0.8719 | 0.5447 | 0.9173 | 0.8612 | 0.8264 | 0.8743 | 0.6990 | 0.7736 | 0.4233 | 0.8198 | 0.7185 | 0.7874 |
| 0.07 | 20.59 | 7620 | 0.5167 | 0.7294 | 0.8273 | 0.8931 | 0.9423 | 0.8160 | 0.8627 | 0.5535 | 0.9156 | 0.8655 | 0.8354 | 0.8751 | 0.6981 | 0.7748 | 0.4189 | 0.8237 | 0.7189 | 0.7963 |
| 0.3463 | 20.65 | 7640 | 0.5487 | 0.7202 | 0.8196 | 0.8867 | 0.9400 | 0.8278 | 0.8703 | 0.5256 | 0.9147 | 0.8751 | 0.7839 | 0.8750 | 0.7002 | 0.7753 | 0.4198 | 0.8069 | 0.7190 | 0.7451 |
| 1.3278 | 20.7 | 7660 | 0.5206 | 0.7261 | 0.8251 | 0.8910 | 0.9440 | 0.8036 | 0.8725 | 0.5500 | 0.9080 | 0.8751 | 0.8224 | 0.8765 | 0.6975 | 0.7747 | 0.4158 | 0.8169 | 0.7185 | 0.7830 |
| 0.0927 | 20.76 | 7680 | 0.5543 | 0.7179 | 0.8190 | 0.8849 | 0.9415 | 0.8215 | 0.8581 | 0.5538 | 0.9142 | 0.8695 | 0.7746 | 0.8764 | 0.6997 | 0.7696 | 0.4219 | 0.8024 | 0.7191 | 0.7364 |
| 0.2095 | 20.81 | 7700 | 0.5537 | 0.7187 | 0.8184 | 0.8857 | 0.9393 | 0.8291 | 0.8589 | 0.5333 | 0.9178 | 0.8714 | 0.7791 | 0.8766 | 0.7009 | 0.7684 | 0.4209 | 0.8037 | 0.7186 | 0.7416 |
| 0.1448 | 20.86 | 7720 | 0.5224 | 0.7306 | 0.8271 | 0.8936 | 0.9408 | 0.8154 | 0.8759 | 0.5356 | 0.9140 | 0.8660 | 0.8420 | 0.8764 | 0.7017 | 0.7709 | 0.4236 | 0.8226 | 0.7195 | 0.7999 |
| 0.1207 | 20.92 | 7740 | 0.5309 | 0.7301 | 0.8263 | 0.8931 | 0.9410 | 0.8200 | 0.8619
| 0.5352 | 0.9152 | 0.8713 | 0.8396 | 0.8752 | 0.7022 | 0.7691 | 0.4258 | 0.8221 | 0.7186 | 0.7977 | | 0.1776 | 20.97 | 7760 | 0.5330 | 0.7285 | 0.8198 | 0.8934 | 0.9418 | 0.8182 | 0.8697 | 0.4839 | 0.9194 | 0.8639 | 0.8415 | 0.8740 | 0.7016 | 0.7706 | 0.4123 | 0.8227 | 0.7194 | 0.7989 | | 0.1048 | 21.03 | 7780 | 0.5440 | 0.7290 | 0.8230 | 0.8931 | 0.9448 | 0.8097 | 0.8735 | 0.5200 | 0.9125 | 0.8580 | 0.8422 | 0.8739 | 0.6989 | 0.7721 | 0.4185 | 0.8225 | 0.7175 | 0.7992 | | 0.1082 | 21.08 | 7800 | 0.5301 | 0.7297 | 0.8274 | 0.8932 | 0.9400 | 0.8203 | 0.8686 | 0.5293 | 0.9122 | 0.8801 | 0.8413 | 0.8759 | 0.7000 | 0.7707 | 0.4212 | 0.8222 | 0.7181 | 0.7997 | | 0.1496 | 21.14 | 7820 | 0.5470 | 0.7287 | 0.8299 | 0.8926 | 0.9427 | 0.8256 | 0.8717 | 0.5486 | 0.9066 | 0.8808 | 0.8333 | 0.8759 | 0.6996 | 0.7722 | 0.4174 | 0.8217 | 0.7175 | 0.7969 | | 0.0932 | 21.19 | 7840 | 0.5195 | 0.7300 | 0.8295 | 0.8933 | 0.9406 | 0.8178 | 0.8741 | 0.5493 | 0.9109 | 0.8735 | 0.8402 | 0.8766 | 0.7004 | 0.7741 | 0.4181 | 0.8228 | 0.7193 | 0.7990 | | 0.1466 | 21.24 | 7860 | 0.5228 | 0.7288 | 0.8293 | 0.8921 | 0.9391 | 0.8230 | 0.8679 | 0.5565 | 0.9131 | 0.8748 | 0.8304 | 0.8764 | 0.6989 | 0.7710 | 0.4269 | 0.8202 | 0.7167 | 0.7915 | | 0.0394 | 21.3 | 7880 | 0.5633 | 0.7241 | 0.8231 | 0.8886 | 0.9386 | 0.8253 | 0.8721 | 0.5429 | 0.9158 | 0.8617 | 0.8054 | 0.8757 | 0.7003 | 0.7688 | 0.4307 | 0.8103 | 0.7161 | 0.7666 | | 0.1345 | 21.35 | 7900 | 0.5410 | 0.7306 | 0.8282 | 0.8932 | 0.9397 | 0.8250 | 0.8754 | 0.5366 | 0.9131 | 0.8665 | 0.8408 | 0.8755 | 0.7008 | 0.7683 | 0.4301 | 0.8215 | 0.7172 | 0.8007 | | 0.1801 | 21.41 | 7920 | 0.5387 | 0.7290 | 0.8278 | 0.8923 | 0.9413 | 0.8294 | 0.8634 | 0.5482 | 0.9129 | 0.8609 | 0.8388 | 0.8738 | 0.6983 | 0.7694 | 0.4262 | 0.8216 | 0.7163 | 0.7973 | | 0.1575 | 21.46 | 7940 | 0.5452 | 0.7284 | 0.8204 | 0.8928 | 0.9441 | 0.7959 | 0.8741 | 0.5224 | 0.9173 | 0.8516 | 0.8372 | 0.8734 | 0.6961 | 0.7709 | 0.4232 | 0.8224 | 0.7152 | 0.7974 | | 0.0942 | 21.51 | 7960 | 0.5162 | 0.7301 | 0.8321 | 0.8932 | 0.9397 | 0.8254 | 0.8756 | 0.5561 | 0.9070 | 0.8756 | 0.8452 | 0.8771 | 0.6995 | 0.7750 | 0.4166 | 0.8214 | 0.7190 | 0.8022 | | 0.1019 | 21.57 | 7980 | 0.5306 | 0.7286 | 0.8285 | 0.8924 | 0.9416 | 0.8238 | 0.8743 | 0.5455 | 0.9090 | 0.8676 | 0.8375 | 0.8748 | 0.7000 | 0.7735 | 0.4162 | 0.8208 | 0.7178 | 0.7974 | | 0.0743 | 21.62 | 8000 | 0.5440 | 0.7282 | 0.8264 | 0.8921 | 0.9427 | 0.8218 | 0.8664 | 0.5419 | 0.9118 | 0.8679 | 0.8320 | 0.8748 | 0.7005 | 0.7706 | 0.4207 | 0.8202 | 0.7173 | 0.7933 | | 0.1217 | 21.68 | 8020 | 0.5120 | 0.7295 | 0.8311 | 0.8927 | 0.9427 | 0.8248 | 0.8734 | 0.5578 | 0.9046 | 0.8748 | 0.8400 | 0.8754 | 0.7003 | 0.7741 | 0.4189 | 0.8219 | 0.7183 | 0.7975 | | 0.1475 | 21.73 | 8040 | 0.5327 | 0.7275 | 0.8246 | 0.8919 | 0.9418 | 0.8002 | 0.8690 | 0.5502 | 0.9134 | 0.8645 | 0.8334 | 0.8755 | 0.6968 | 0.7715 | 0.4204 | 0.8197 | 0.7174 | 0.7913 | | 0.1826 | 21.78 | 8060 | 0.5325 | 0.7262 | 0.8273 | 0.8908 | 0.9416 | 0.8221 | 0.8712 | 0.5580 | 0.9108 | 0.8645 | 0.8227 | 0.8756 | 0.6983 | 0.7754 | 0.4152 | 0.8174 | 0.7178 | 0.7840 | | 0.0675 | 21.84 | 8080 | 0.5753 | 0.7282 | 0.8306 | 0.8921 | 0.9397 | 0.8173 | 0.8692 | 0.5628 | 0.9076 | 0.8811 | 0.8362 | 0.8760 | 0.6965 | 0.7727 | 0.4177 | 0.8204 | 0.7172 | 0.7972 | | 0.0812 | 21.89 | 8100 | 0.5417 | 0.7295 | 0.8309 | 0.8926 | 0.9372 | 0.8265 | 0.8756 | 0.5521 | 0.9108 | 0.8727 | 0.8413 | 0.8749 | 0.6972 | 0.7732 | 0.4221 | 0.8216 | 0.7196 | 0.7983 | | 0.1776 | 21.95 | 8120 | 0.5303 | 0.7287 | 0.8295 | 0.8921 | 0.9402 | 
0.8105 | 0.8765 | 0.5650 | 0.9096 | 0.8716 | 0.8329 | 0.8756 | 0.6965 | 0.7742 | 0.4219 | 0.8203 | 0.7202 | 0.7924 | | 0.119 | 22.0 | 8140 | 0.5318 | 0.7263 | 0.8259 | 0.8904 | 0.9404 | 0.8202 | 0.8735 | 0.5467 | 0.9111 | 0.8700 | 0.8195 | 0.8752 | 0.6995 | 0.7739 | 0.4228 | 0.8153 | 0.7206 | 0.7768 | | 0.0873 | 22.05 | 8160 | 0.5227 | 0.7298 | 0.8328 | 0.8926 | 0.9396 | 0.8222 | 0.8697 | 0.5743 | 0.9073 | 0.8764 | 0.8402 | 0.8753 | 0.7002 | 0.7743 | 0.4176 | 0.8221 | 0.7213 | 0.7978 | | 0.0634 | 22.11 | 8180 | 0.5273 | 0.7281 | 0.8285 | 0.8919 | 0.9413 | 0.7973 | 0.8735 | 0.5803 | 0.9108 | 0.8627 | 0.8335 | 0.8752 | 0.6951 | 0.7736 | 0.4190 | 0.8209 | 0.7190 | 0.7940 | | 0.1767 | 22.16 | 8200 | 0.5275 | 0.7300 | 0.8306 | 0.8928 | 0.9388 | 0.8289 | 0.8729 | 0.5560 | 0.9123 | 0.8680 | 0.8378 | 0.8745 | 0.7004 | 0.7738 | 0.4224 | 0.8225 | 0.7194 | 0.7969 | | 0.1508 | 22.22 | 8220 | 0.5532 | 0.7289 | 0.8291 | 0.8921 | 0.9385 | 0.8246 | 0.8679 | 0.5526 | 0.9118 | 0.8693 | 0.8388 | 0.8738 | 0.6998 | 0.7710 | 0.4227 | 0.8205 | 0.7178 | 0.7968 | | 0.0866 | 22.27 | 8240 | 0.5304 | 0.7303 | 0.8280 | 0.8930 | 0.9380 | 0.8240 | 0.8680 | 0.5412 | 0.9161 | 0.8676 | 0.8415 | 0.8747 | 0.7010 | 0.7729 | 0.4240 | 0.8219 | 0.7197 | 0.7978 | | 0.1141 | 22.32 | 8260 | 0.5599 | 0.7296 | 0.8305 | 0.8925 | 0.9413 | 0.8189 | 0.8704 | 0.5594 | 0.9070 | 0.8785 | 0.8376 | 0.8754 | 0.6997 | 0.7724 | 0.4208 | 0.8206 | 0.7200 | 0.7979 | | 0.5142 | 22.38 | 8280 | 0.5377 | 0.7294 | 0.8281 | 0.8926 | 0.9413 | 0.8232 | 0.8723 | 0.5388 | 0.9106 | 0.8792 | 0.8310 | 0.8753 | 0.7005 | 0.7727 | 0.4233 | 0.8208 | 0.7195 | 0.7935 | | 0.1262 | 22.43 | 8300 | 0.5415 | 0.7208 | 0.8213 | 0.8865 | 0.9406 | 0.8204 | 0.8620 | 0.5525 | 0.9135 | 0.8714 | 0.7885 | 0.8750 | 0.6993 | 0.7717 | 0.4248 | 0.8064 | 0.7194 | 0.7492 | | 0.0996 | 22.49 | 8320 | 0.5172 | 0.7284 | 0.8277 | 0.8919 | 0.9414 | 0.8244 | 0.8787 | 0.5501 | 0.9114 | 0.8573 | 0.8309 | 0.8752 | 0.7005 | 0.7751 | 0.4204 | 0.8189 | 0.7196 | 0.7891 | | 0.099 | 22.54 | 8340 | 0.5433 | 0.7298 | 0.8289 | 0.8927 | 0.9408 | 0.8299 | 0.8624 | 0.5486 | 0.9124 | 0.8681 | 0.8401 | 0.8751 | 0.7008 | 0.7688 | 0.4232 | 0.8205 | 0.7204 | 0.7997 | | 0.0542 | 22.59 | 8360 | 0.5318 | 0.7291 | 0.8300 | 0.8922 | 0.9415 | 0.8288 | 0.8653 | 0.5631 | 0.9094 | 0.8622 | 0.8401 | 0.8746 | 0.6998 | 0.7707 | 0.4204 | 0.8199 | 0.7200 | 0.7983 | | 0.107 | 22.65 | 8380 | 0.5395 | 0.7293 | 0.8302 | 0.8924 | 0.9412 | 0.8340 | 0.8674 | 0.5524 | 0.9095 | 0.8701 | 0.8367 | 0.8749 | 0.7002 | 0.7702 | 0.4215 | 0.8205 | 0.7204 | 0.7977 | | 0.1479 | 22.7 | 8400 | 0.5555 | 0.7297 | 0.8283 | 0.8928 | 0.9433 | 0.8251 | 0.8648 | 0.5496 | 0.9110 | 0.8681 | 0.8365 | 0.8746 | 0.6999 | 0.7724 | 0.4213 | 0.8219 | 0.7201 | 0.7974 | | 0.042 | 22.76 | 8420 | 0.5373 | 0.7259 | 0.8257 | 0.8900 | 0.9398 | 0.8162 | 0.8614 | 0.5572 | 0.9135 | 0.8759 | 0.8161 | 0.8753 | 0.6993 | 0.7712 | 0.4241 | 0.8147 | 0.7199 | 0.7769 | | 1.2058 | 22.81 | 8440 | 0.5498 | 0.7249 | 0.8244 | 0.8896 | 0.9424 | 0.8223 | 0.8755 | 0.5537 | 0.9130 | 0.8528 | 0.8112 | 0.8738 | 0.6991 | 0.7754 | 0.4188 | 0.8148 | 0.7187 | 0.7736 | | 0.0554 | 22.86 | 8460 | 0.5314 | 0.7270 | 0.8267 | 0.8910 | 0.9414 | 0.8247 | 0.8741 | 0.5495 | 0.9117 | 0.8640 | 0.8216 | 0.8742 | 0.6997 | 0.7747 | 0.4202 | 0.8176 | 0.7213 | 0.7815 | | 0.171 | 22.92 | 8480 | 0.5404 | 0.7260 | 0.8255 | 0.8904 | 0.9416 | 0.8163 | 0.8787 | 0.5466 | 0.9099 | 0.8679 | 0.8173 | 0.8745 | 0.6992 | 0.7735 | 0.4193 | 0.8160 | 0.7215 | 0.7777 | | 0.2158 | 22.97 | 8500 | 0.5370 | 0.7286 | 0.8289 | 0.8922 
| 0.9415 | 0.8259 | 0.8658 | 0.5471 | 0.9091 | 0.8797 | 0.8330 | 0.8750 | 0.6999 | 0.7725 | 0.4179 | 0.8207 | 0.7215 | 0.7925 | | 1.52 | 23.03 | 8520 | 0.5842 | 0.7270 | 0.8247 | 0.8914 | 0.9417 | 0.8265 | 0.8659 | 0.5366 | 0.9155 | 0.8610 | 0.8258 | 0.8743 | 0.6995 | 0.7703 | 0.4172 | 0.8186 | 0.7198 | 0.7890 | | 0.529 | 23.08 | 8540 | 0.5676 | 0.7283 | 0.8253 | 0.8923 | 0.9416 | 0.8326 | 0.8613 | 0.5270 | 0.9158 | 0.8643 | 0.8343 | 0.8738 | 0.6996 | 0.7691 | 0.4192 | 0.8207 | 0.7197 | 0.7960 | | 0.1741 | 23.14 | 8560 | 0.5727 | 0.7221 | 0.8205 | 0.8879 | 0.9415 | 0.8098 | 0.8613 | 0.5479 | 0.9152 | 0.8696 | 0.7984 | 0.8745 | 0.6984 | 0.7692 | 0.4202 | 0.8103 | 0.7188 | 0.7631 | | 0.1051 | 23.19 | 8580 | 0.5467 | 0.7276 | 0.8296 | 0.8913 | 0.9389 | 0.8303 | 0.8751 | 0.5555 | 0.9109 | 0.8711 | 0.8250 | 0.8750 | 0.6998 | 0.7760 | 0.4177 | 0.8184 | 0.7212 | 0.7850 | | 0.1127 | 23.24 | 8600 | 0.5468 | 0.7260 | 0.8249 | 0.8904 | 0.9407 | 0.8258 | 0.8640 | 0.5454 | 0.9146 | 0.8641 | 0.8196 | 0.8748 | 0.7002 | 0.7715 | 0.4207 | 0.8156 | 0.7195 | 0.7796 | | 0.17 | 23.3 | 8620 | 0.5703 | 0.7214 | 0.8138 | 0.8882 | 0.9441 | 0.8009 | 0.8589 | 0.5086 | 0.9188 | 0.8665 | 0.7993 | 0.8735 | 0.6976 | 0.7684 | 0.4193 | 0.8102 | 0.7193 | 0.7613 | | 0.0949 | 23.35 | 8640 | 0.5267 | 0.7298 | 0.8273 | 0.8934 | 0.9404 | 0.8271 | 0.8779 | 0.5269 | 0.9136 | 0.8633 | 0.8415 | 0.8755 | 0.7009 | 0.7761 | 0.4150 | 0.8221 | 0.7202 | 0.7990 | | 1.5673 | 23.41 | 8660 | 0.5401 | 0.7268 | 0.8274 | 0.8911 | 0.9420 | 0.8186 | 0.8733 | 0.5551 | 0.9094 | 0.8699 | 0.8240 | 0.8759 | 0.6996 | 0.7749 | 0.4144 | 0.8171 | 0.7199 | 0.7859 | | 0.5701 | 23.46 | 8680 | 0.5517 | 0.7261 | 0.8286 | 0.8905 | 0.9412 | 0.8296 | 0.8681 | 0.5617 | 0.9091 | 0.8678 | 0.8223 | 0.8753 | 0.6996 | 0.7738 | 0.4140 | 0.8161 | 0.7191 | 0.7845 | | 0.1587 | 23.51 | 8700 | 0.5709 | 0.7235 | 0.8183 | 0.8894 | 0.9442 | 0.8075 | 0.8675 | 0.5310 | 0.9175 | 0.8494 | 0.8108 | 0.8736 | 0.6984 | 0.7708 | 0.4167 | 0.8137 | 0.7158 | 0.7756 | | 0.1153 | 23.57 | 8720 | 0.5978 | 0.7220 | 0.8225 | 0.8878 | 0.9406 | 0.8162 | 0.8667 | 0.5568 | 0.9136 | 0.8619 | 0.8018 | 0.8750 | 0.6997 | 0.7707 | 0.4151 | 0.8095 | 0.7182 | 0.7659 | | 0.1148 | 23.62 | 8740 | 0.5372 | 0.7284 | 0.8294 | 0.8921 | 0.9412 | 0.8239 | 0.8748 | 0.5533 | 0.9077 | 0.8684 | 0.8366 | 0.8752 | 0.6995 | 0.7751 | 0.4134 | 0.8196 | 0.7194 | 0.7963 | | 0.2087 | 23.68 | 8760 | 0.5470 | 0.7285 | 0.8302 | 0.8920 | 0.9373 | 0.8269 | 0.8707 | 0.5554 | 0.9108 | 0.8709 | 0.8395 | 0.8751 | 0.6990 | 0.7723 | 0.4165 | 0.8190 | 0.7194 | 0.7982 | | 0.1216 | 23.73 | 8780 | 0.5484 | 0.7280 | 0.8299 | 0.8916 | 0.9382 | 0.8336 | 0.8610 | 0.5569 | 0.9117 | 0.8697 | 0.8380 | 0.8754 | 0.6995 | 0.7699 | 0.4164 | 0.8179 | 0.7188 | 0.7979 | | 0.098 | 23.78 | 8800 | 0.5326 | 0.7289 | 0.8286 | 0.8924 | 0.9391 | 0.8182 | 0.8663 | 0.5531 | 0.9118 | 0.8679 | 0.8436 | 0.8758 | 0.6996 | 0.7723 | 0.4154 | 0.8195 | 0.7197 | 0.8001 | | 0.0423 | 23.84 | 8820 | 0.5410 | 0.7278 | 0.8294 | 0.8917 | 0.9413 | 0.8324 | 0.8598 | 0.5582 | 0.9090 | 0.8654 | 0.8397 | 0.8746 | 0.6993 | 0.7712 | 0.4144 | 0.8190 | 0.7185 | 0.7977 | | 0.0599 | 23.89 | 8840 | 0.5354 | 0.7263 | 0.8275 | 0.8908 | 0.9418 | 0.8331 | 0.8599 | 0.5525 | 0.9103 | 0.8680 | 0.8272 | 0.8744 | 0.6991 | 0.7719 | 0.4149 | 0.8170 | 0.7184 | 0.7886 | | 0.1217 | 23.95 | 8860 | 0.5390 | 0.7278 | 0.8243 | 0.8921 | 0.9395 | 0.8286 | 0.8653 | 0.5150 | 0.9166 | 0.8740 | 0.8311 | 0.8750 | 0.7013 | 0.7723 | 0.4158 | 0.8192 | 0.7193 | 0.7916 | | 0.1271 | 24.0 | 8880 | 0.5720 | 0.7230 | 0.8223 
| 0.8885 | 0.9396 | 0.8296 | 0.8644 | 0.5324 | 0.9142 | 0.8678 | 0.8078 | 0.8749 | 0.7004 | 0.7687 | 0.4200 | 0.8101 | 0.7180 | 0.7691 | | 0.8749 | 24.05 | 8900 | 0.5618 | 0.7229 | 0.8224 | 0.8885 | 0.9406 | 0.8369 | 0.8630 | 0.5319 | 0.9148 | 0.8648 | 0.8049 | 0.8747 | 0.7003 | 0.7712 | 0.4194 | 0.8108 | 0.7179 | 0.7660 | | 0.1267 | 24.11 | 8920 | 0.5620 | 0.7189 | 0.8196 | 0.8855 | 0.9403 | 0.8298 | 0.8642 | 0.5377 | 0.9132 | 0.8705 | 0.7815 | 0.8750 | 0.6998 | 0.7708 | 0.4212 | 0.8035 | 0.7181 | 0.7437 | | 0.0353 | 24.16 | 8940 | 0.5602 | 0.7189 | 0.8200 | 0.8855 | 0.9423 | 0.8229 | 0.8692 | 0.5499 | 0.9107 | 0.8636 | 0.7818 | 0.8745 | 0.6997 | 0.7715 | 0.4203 | 0.8038 | 0.7180 | 0.7448 | | 0.0954 | 24.22 | 8960 | 0.5401 | 0.7248 | 0.8285 | 0.8894 | 0.9394 | 0.8352 | 0.8624 | 0.5651 | 0.9097 | 0.8692 | 0.8182 | 0.8758 | 0.6992 | 0.7730 | 0.4175 | 0.8130 | 0.7193 | 0.7760 | | 0.1345 | 24.27 | 8980 | 0.5673 | 0.7198 | 0.8213 | 0.8861 | 0.9410 | 0.8262 | 0.8680 | 0.5563 | 0.9134 | 0.8585 | 0.7854 | 0.8746 | 0.6991 | 0.7717 | 0.4203 | 0.8054 | 0.7180 | 0.7495 | | 0.182 | 24.32 | 9000 | 0.5590 | 0.7199 | 0.8217 | 0.8861 | 0.9403 | 0.8270 | 0.8610 | 0.5642 | 0.9147 | 0.8526 | 0.7916 | 0.8746 | 0.6987 | 0.7706 | 0.4196 | 0.8054 | 0.7174 | 0.7530 | | 0.0283 | 24.38 | 9020 | 0.5669 | 0.7185 | 0.8202 | 0.8851 | 0.9408 | 0.8212 | 0.8673 | 0.5536 | 0.9121 | 0.8696 | 0.7770 | 0.8747 | 0.6992 | 0.7720 | 0.4202 | 0.8031 | 0.7189 | 0.7413 | | 0.1126 | 24.43 | 9040 | 0.5652 | 0.7233 | 0.8221 | 0.8886 | 0.9419 | 0.8274 | 0.8676 | 0.5325 | 0.9121 | 0.8682 | 0.8051 | 0.8740 | 0.7003 | 0.7725 | 0.4194 | 0.8111 | 0.7195 | 0.7659 | | 0.1226 | 24.49 | 9060 | 0.5617 | 0.7205 | 0.8208 | 0.8866 | 0.9417 | 0.8152 | 0.8606 | 0.5532 | 0.9116 | 0.8708 | 0.7928 | 0.8749 | 0.6993 | 0.7694 | 0.4200 | 0.8063 | 0.7191 | 0.7547 | | 0.1244 | 24.54 | 9080 | 0.5755 | 0.7207 | 0.8229 | 0.8865 | 0.9391 | 0.8244 | 0.8632 | 0.5605 | 0.9126 | 0.8655 | 0.7949 | 0.8748 | 0.6991 | 0.7693 | 0.4193 | 0.8059 | 0.7196 | 0.7567 | | 0.0556 | 24.59 | 9100 | 0.5516 | 0.7217 | 0.8259 | 0.8872 | 0.9387 | 0.8315 | 0.8668 | 0.5669 | 0.9103 | 0.8669 | 0.8002 | 0.8752 | 0.6991 | 0.7720 | 0.4162 | 0.8079 | 0.7197 | 0.7617 | | 0.1242 | 24.65 | 9120 | 0.5604 | 0.7243 | 0.8230 | 0.8893 | 0.9423 | 0.8217 | 0.8642 | 0.5392 | 0.9121 | 0.8700 | 0.8113 | 0.8745 | 0.6998 | 0.7707 | 0.4201 | 0.8128 | 0.7196 | 0.7727 | | 0.2209 | 24.7 | 9140 | 0.5639 | 0.7218 | 0.8229 | 0.8876 | 0.9436 | 0.8224 | 0.8605 | 0.5561 | 0.9094 | 0.8689 | 0.7992 | 0.8741 | 0.6989 | 0.7700 | 0.4189 | 0.8094 | 0.7187 | 0.7628 | | 0.1885 | 24.76 | 9160 | 0.5806 | 0.7212 | 0.8184 | 0.8873 | 0.9400 | 0.8232 | 0.8684 | 0.5415 | 0.9211 | 0.8368 | 0.7980 | 0.8741 | 0.6996 | 0.7702 | 0.4207 | 0.8076 | 0.7150 | 0.7609 | | 0.0536 | 24.81 | 9180 | 0.5671 | 0.7228 | 0.8232 | 0.8883 | 0.9408 | 0.8333 | 0.8617 | 0.5446 | 0.9136 | 0.8618 | 0.8064 | 0.8742 | 0.6989 | 0.7693 | 0.4201 | 0.8104 | 0.7190 | 0.7678 | | 0.1423 | 24.86 | 9200 | 0.5636 | 0.7233 | 0.8210 | 0.8887 | 0.9412 | 0.8229 | 0.8660 | 0.5305 | 0.9146 | 0.8657 | 0.8061 | 0.8743 | 0.7000 | 0.7699 | 0.4206 | 0.8109 | 0.7196 | 0.7676 | | 0.0816 | 24.92 | 9220 | 0.5686 | 0.7222 | 0.8216 | 0.8880 | 0.9433 | 0.8247 | 0.8596 | 0.5472 | 0.9134 | 0.8637 | 0.7993 | 0.8741 | 0.6991 | 0.7707 | 0.4196 | 0.8104 | 0.7191 | 0.7627 | | 0.132 | 24.97 | 9240 | 0.5503 | 0.7259 | 0.8239 | 0.8905 | 0.9420 | 0.8275 | 0.8697 | 0.5254 | 0.9118 | 0.8725 | 0.8182 | 0.8743 | 0.7005 | 0.7725 | 0.4188 | 0.8159 | 0.7204 | 0.7786 | ### Framework versions - 
Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.17.1
- Tokenizers 0.13.3
Imadken/llama-7b-chat-lamini_docs
Imadken
2024-02-22T21:00:50Z
0
0
peft
[ "peft", "region:us" ]
null
2024-02-22T20:57:27Z
---
library_name: peft
---
## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.5.0
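The config above maps directly onto the `BitsAndBytesConfig` class in `transformers`. As a minimal sketch (the base checkpoint id below is an assumption inferred from the repo name, not stated in the card), loading a base model in 4-bit with exactly these settings might look like:

```python
# Sketch only: reconstructs the quantization config listed above.
# The base checkpoint id is a hypothetical inferred from the repo name.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # load_in_4bit: True
    bnb_4bit_quant_type="nf4",             # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=False,       # bnb_4bit_use_double_quant: False
    bnb_4bit_compute_dtype=torch.float16,  # bnb_4bit_compute_dtype: float16
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # hypothetical base model
    quantization_config=bnb_config,
    device_map="auto",
)
```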
goxai/gemm
goxai
2024-02-22T21:00:31Z
9
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:2312.11805", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:1705.03551", "arxiv:2107.03374", "arxiv:2108.07732", "arxiv:2110.14168", "arxiv:2304.06364", "arxiv:2206.04615", "arxiv:1804.06876", "arxiv:2110.08193", "arxiv:2009.11462", "arxiv:2101.11718", "arxiv:1804.09301", "arxiv:2109.07958", "arxiv:2203.09509", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-22T19:53:10Z
---
library_name: transformers
tags: []
widget:
  - text: |
      <start_of_turn>user
      How does the brain work?<end_of_turn>
      <start_of_turn>model
inference:
  parameters:
    max_new_tokens: 200
extra_gated_heading: "Access Gemma on Hugging Face"
extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
extra_gated_button_content: "Acknowledge license"
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---

# Gemma Model Card

**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)

This model card corresponds to the 7B instruct version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B base model](https://huggingface.co/google/gemma-7b), and [2B instruct model](https://huggingface.co/google/gemma-2b-it).

**Resources and Technical Documentation**:

* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-7b-it-gg-hf)

**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)

**Authors**: Google

## Model Information

Summary description and brief definition of inputs and outputs.

### Description

Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop, or your own cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.

### Usage

Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.

#### Fine-tuning the model

You can find fine-tuning scripts and a notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of the [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt them to this model, simply change the model id to `google/gemma-7b-it`.

In that repository, we provide:

* A script to perform Supervised Fine-Tuning (SFT) on the UltraChat dataset using QLoRA
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on an English quotes dataset

#### Running the model on a CPU

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it")

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

#### Running the model on a single / multi GPU

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto")

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

#### Running the model on a GPU using different precisions

* _Using `torch.float16`_

```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto", torch_dtype=torch.float16)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

* _Using `torch.bfloat16`_

```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto", torch_dtype=torch.bfloat16)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

#### Quantized Versions through `bitsandbytes`

* _Using 8-bit precision (int8)_

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

* _Using 4-bit precision_

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

#### Other optimizations

* _Flash Attention 2_

First make sure to install `flash-attn` in your environment: `pip install flash-attn`

```diff
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
+   attn_implementation="flash_attention_2"
).to(0)
```

### Chat Template

The instruction-tuned models use a chat template that must be adhered to for conversational use. The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.

Let's load the model and apply the chat template to a conversation.
In this example, we'll start with a single user interaction:

```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "google/gemma-7b-it"
dtype = torch.bfloat16

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)

chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```

At this point, the prompt contains the following text:

```
<start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```

As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity (either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with the `<end_of_turn>` token.

You can follow this format to build the prompt manually, if you need to do it without the tokenizer's chat template.

After the prompt is ready, generation can be performed like this:

```py
inputs = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```

### Inputs and outputs

* **Input:** Text string, such as a question, a prompt, or a document to be summarized.
* **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document.

## Model Data

Data used for model training and how the data was processed.

### Training Dataset

These models were trained on a dataset of text data that includes a wide variety of sources, totaling 6 trillion tokens. Here are the key components:

* Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries.

The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats.

### Data Preprocessing

Here are the key data cleaning and filtering methods applied to the training data:

* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).

## Implementation Information

Details about the model internals.

### Hardware

Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).

Training large language models requires significant computational power.
TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain:

* Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training.
* These advantages are aligned with [Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).

### Software

Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture).

JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models.

ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is especially suitable for [foundation models](https://ai.google/discover/foundation-models/), including large language models like these ones.

Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models](https://arxiv.org/abs/2312.11805): "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow."

## Evaluation

Model evaluation metrics and results.
### Benchmark Results

These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation:

| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| ------------------------------ | ------------- | ----------- | --------- |
| **Average** | | **54.0** | **56.4** |

## Ethics and Safety

Ethics and safety evaluation approach and results.

### Evaluation Approach

Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including:

* Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks.

### Evaluation Results

The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety, representational harms, memorization, and large-scale harms. On top of robust internal evaluations, the results of well-known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |

## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development.

* Content Creation and Communication
  * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts.
  * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications.
  * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports.
* Research and Education
  * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field.
  * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice.
  * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics.

### Limitations

* Training Data
  * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses.
  * The scope of the training dataset determines the subject areas the model can handle effectively.
* Context and Task Complexity
  * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging.
  * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
  * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language.
* Factual Accuracy
  * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements.
* Common Sense
  * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks

The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:

* Bias and Fairness
  * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny; input data pre-processing is described and posterior evaluations are reported in this card.
* Misinformation and Misuse
  * LLMs can be misused to generate text that is false, misleading, or harmful.
  * Guidelines are provided for responsible use with the model; see the [Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
  * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
  * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem.

Risks identified and mitigations:

* Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.

### Benefits

At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models.

Using the benchmark evaluation metrics described in this document, these models have been shown to provide superior performance to other, comparably-sized open model alternatives.
guirnd/ppo-LunarLander-v2-unit1
guirnd
2024-02-22T20:54:22Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-22T20:54:07Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: -770.17 +/- 286.81
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code (a hypothetical loading sketch follows below)

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub

...
```
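Since the usage section above is left as a TODO, here is a minimal, hypothetical sketch of downloading and evaluating this checkpoint; the checkpoint filename is an assumption, and the standard `huggingface_sb3` / `stable-baselines3` APIs are presumed:

```python
# Sketch only: pull the checkpoint from the Hub and evaluate it.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="guirnd/ppo-LunarLander-v2-unit1",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```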
newyorksteak/bert-finetuned-squad
newyorksteak
2024-02-22T20:48:52Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "question-answering", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-02-22T18:35:31Z
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-finetuned-squad

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
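The card gives no usage example; a minimal, hypothetical sketch for an extractive QA checkpoint like this one (repo id taken from this record, question and context invented for illustration) might be:

```python
# Sketch only: extractive question answering with the fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="newyorksteak/bert-finetuned-squad")
result = qa(
    question="What task was the model fine-tuned for?",  # invented example
    context="This BERT model was fine-tuned on a SQuAD-style dataset for "
            "extractive question answering.",
)
print(result["answer"], round(result["score"], 3))
```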
princedl/ml6team-gpt2-small-german-finetune-oscar-1file
princedl
2024-02-22T20:46:39Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:ml6team/gpt2-small-german-finetune-oscar", "base_model:finetune:ml6team/gpt2-small-german-finetune-oscar", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-22T20:20:34Z
---
base_model: ml6team/gpt2-small-german-finetune-oscar
tags:
- generated_from_trainer
model-index:
- name: ml6team-gpt2-small-german-finetune-oscar-1file
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# ml6team-gpt2-small-german-finetune-oscar-1file

This model is a fine-tuned version of [ml6team/gpt2-small-german-finetune-oscar](https://huggingface.co/ml6team/gpt2-small-german-finetune-oscar) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5331

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.4429        | 1.0   | 257  | 3.5679          |
| 3.3122        | 2.0   | 514  | 3.5112          |
| 3.3409        | 3.0   | 771  | 3.5015          |
| 3.0541        | 4.0   | 1028 | 3.5065          |
| 2.9332        | 5.0   | 1285 | 3.5201          |
| 2.4331        | 6.0   | 1542 | 3.5331          |

### Framework versions

- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
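No usage example is given; as a hypothetical sketch, German text generation with this checkpoint through the standard pipeline API (the prompt is invented) could look like:

```python
# Sketch only: German text generation with this fine-tuned GPT-2 checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="princedl/ml6team-gpt2-small-german-finetune-oscar-1file",
)
print(generator("Heute ist ein schöner Tag,", max_new_tokens=40)[0]["generated_text"])
```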
adalib/monkey-cond-gen-sub-20-codegen-2B-mono-prefix
adalib
2024-02-22T20:39:52Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Salesforce/codegen-2B-mono", "base_model:adapter:Salesforce/codegen-2B-mono", "region:us" ]
null
2024-02-22T20:39:48Z
--- library_name: peft base_model: Salesforce/codegen-2B-mono --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
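Since the "How to Get Started with the Model" section above is empty, here is a hypothetical sketch of attaching this PEFT adapter to its base model; the base checkpoint id comes from the card metadata, and the standard `peft` API is assumed:

```python
# Sketch only: load the base model, then attach this adapter with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Salesforce/codegen-2B-mono"  # from the card's base_model field
adapter_id = "adalib/monkey-cond-gen-sub-20-codegen-2B-mono-prefix"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")  # invented prompt
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```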
LoneStriker/opus-v1-34b-8.0bpw-h8-exl2
LoneStriker
2024-02-22T20:39:38Z
5
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "unsloth", "axolotl", "conversational", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-22T20:25:37Z
---
language:
- en
pipeline_tag: text-generation
tags:
- unsloth
- axolotl
---

# DreamGen Opus V1

<div style="display: flex; flex-direction: row; align-items: center;">
<img src="/dreamgen/opus-v1-34b/resolve/main/images/logo-1024.png" alt="model logo" style="
  border-radius: 12px;
  margin-right: 12px;
  margin-top: 0px;
  margin-bottom: 0px;
  max-width: 100px;
  height: auto;
"/>

Models for **(steerable) story-writing and role-playing**.
<br/>[All Opus V1 models, including quants](https://huggingface.co/collections/dreamgen/opus-v1-65d092a6f8ab7fc669111b31).

</div>

## Prompting

[Read the full Opus V1 prompting guide](https://dreamgen.com/docs/models/opus/v1) with many (interactive) examples and prompts that you can readily copy.

<details>
<summary>The models use an extended version of ChatML.</summary>

```
<|im_start|>system
(Story description in the right format here)
(Typically consists of plot description, style description and characters)<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Alice
(Continuation of the story from the Alice character)<|im_end|>
<|im_start|>text
(Continuation of the story from no character in particular (pure narration))<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Bob
(Continuation of the story from the Bob character)<|im_end|>
```

The Opus V1 extension is the addition of the `text` role, and the addition / modification of role names.

Pay attention to the following:

- The `text` messages can (but don't have to) have `names`; names are used to indicate the "active" character during role-play.
- There can be multiple subsequent messages with a `text` role, especially if names are involved.
- There can be multiple names attached to a message.
- The format for names is `names= {{name[0]}}; {{name[1]}}`; beware of the spaces after `names=` and after the `;`. This spacing leads to the most natural tokenization for the names.

</details>

While the main goal for the models is great story-writing and role-playing performance, the models are also capable of several writing-related tasks as well as general assistance.

<img src="/dreamgen/opus-v1-34b/resolve/main/images/story_writing.webp" alt="story writing" style="
  padding: 12px;
  border-radius: 12px;
  border: 2px solid #f9a8d4;
  background: rgb(9, 9, 11);
"/>

Here's how you can prompt the model for the following tasks:

- Steerable [Story-writing](https://dreamgen.com/docs/models/opus/v1#task-story-writing) and [Role-playing](https://dreamgen.com/docs/models/opus/v1#task-role-playing):
  - Input:
    - System prompt: You provide a story / role-play description, which consists of:
      - Plot description
      - Style description
      - Characters and their descriptions
    - Conversation turns:
      - Text / message turn: This represents part of the story or role play
      - Instruction: This tells the model what should happen next
  - Output: Continuation of the story / role-play.
- [Story plot summarization](https://dreamgen.com/docs/models/opus/v1#task-plot-description)
  - Input: A story, or a few chapters of a story.
  - Output: A description of the story or chapters.
- [Story character description](https://dreamgen.com/docs/models/opus/v1#task-char-description)
  - Input: A story, or a few chapters of a story, set of characters.
  - Output: A description of the characters.
- [Story style description](https://dreamgen.com/docs/models/opus/v1#task-style-description)
  - Input: A story, or a few chapters of a story.
  - Output: A description of the style of the story.
- [Story description to chapters](https://dreamgen.com/docs/models/opus/v1#task-story-description-to-chapter-descriptions)
  - Input: A brief plot description and the desired number of chapters.
  - Output: A description for each chapter.
- And more...

### Sampling params

For story-writing and role-play, I recommend "Min P" based sampling with `min_p` in the range `[0.01, 0.1]` and with `temperature` in the range `[0.5, 1.5]`, depending on your preferences. A good starting point would be `min_p=0.1; temperature=0.8`. You may also benefit from setting presence, frequency and repetition penalties, especially at lower temperatures. (A minimal sketch of these settings follows at the end of this card.)

## Dataset

The fine-tuning dataset consisted of ~100M tokens of steerable story-writing, role-playing, writing-assistant and general-assistant examples. Each example was up to 31000 tokens long.

All story-writing and role-playing examples were based on human-written text.

![token count distribution](images/token_count_cum__token_bucket.png)

## Running the model

The model should be compatible with any software that supports the base model, but beware of the prompting (see above).

### Running Locally

- [Chat template from model config](tokenizer_config.json#L51)
  - This uses the "text" role instead of the typical "assistant" role, and it does not (can't?) support names
- [LM Studio config](configs/lmstudio.json)
  - This uses the "text" role as well

### Running on DreamGen.com (free)

You can try the model for free on [dreamgen.com](https://dreamgen.com) — note that an account is required.

## Community

Join the DreamGen community on [**Discord**](https://dreamgen.com/discord) to get early access to new models.

## License

- This model is intended for personal use only; other use is not permitted.
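As referenced in the Sampling params section above, here is a hypothetical sketch of those settings with the `transformers` `generate()` API; it assumes a `transformers` release whose generation config supports `min_p`, and the prompt is a placeholder built in the card's extended-ChatML format:

```python
# Sketch only: "Min P" sampling with the card's suggested starting values.
# Assumes a transformers version whose GenerationConfig supports min_p.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dreamgen/opus-v1-34b"  # base repo of this quant, per the card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Placeholder prompt in the extended ChatML format described under "Prompting".
prompt = (
    "<|im_start|>system\n"
    "(Story description in the right format here)<|im_end|>\n"
    "<|im_start|>user\n"
    "(Your instruction on how the story should continue)<|im_end|>\n"
    "<|im_start|>text names= Alice\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=300,
    do_sample=True,
    min_p=0.1,               # suggested starting point from the card
    temperature=0.8,         # suggested starting point from the card
    repetition_penalty=1.1,  # optional, per the card's note on penalties
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```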
SyedShaheer/ignore_mode
SyedShaheer
2024-02-22T20:38:48Z
105
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-22T19:47:23Z
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: ignore_mode
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# ignore_mode

This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5796
- Rouge1: 0.2492
- Rouge2: 0.0635
- Rougel: 0.1573
- Rougelsum: 0.1574
- Gen Len: 80.4

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log        | 1.0   | 1    | 2.6776          | 0.2291 | 0.0618 | 0.1425 | 0.1427    | 85.4    |
| No log        | 2.0   | 2    | 2.5796          | 0.2492 | 0.0635 | 0.1573 | 0.1574    | 80.4    |

### Framework versions

- Transformers 4.27.2
- Pytorch 2.1.1
- Datasets 2.11.0
- Tokenizers 0.13.3
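As a hypothetical usage sketch (the card leaves intended uses blank), this BART checkpoint can be called through the text2text pipeline; the example input is invented for illustration:

```python
# Sketch only: text2text generation with this checkpoint.
from transformers import pipeline

generator = pipeline("text2text-generation", model="SyedShaheer/ignore_mode")
text = "Summarize: The quarterly report shows revenue grew while costs held steady."  # invented input
print(generator(text, max_new_tokens=80)[0]["generated_text"])
```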
guirnd/rl_course_vizdoom_health_gathering_supreme
guirnd
2024-02-22T20:38:21Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-22T20:38:14Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 13.84 +/- 5.27 name: mean_reward verified: false --- An **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r guirnd/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment (with stock Sample-Factory, `sf_examples.vizdoom.enjoy_vizdoom`): ``` python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details. ## Training with this model To continue training with this model, use the `train` script corresponding to this environment (with stock Sample-Factory, `sf_examples.vizdoom.train_vizdoom`): ``` python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
davidpedem/mbart-neutralization
davidpedem
2024-02-22T20:33:57Z
10
0
transformers
[ "transformers", "tensorboard", "safetensors", "mbart", "text2text-generation", "simplification", "generated_from_trainer", "base_model:facebook/mbart-large-50", "base_model:finetune:facebook/mbart-large-50", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-22T20:20:51Z
--- license: mit base_model: facebook/mbart-large-50 tags: - simplification - generated_from_trainer metrics: - bleu model-index: - name: mbart-neutralization results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-neutralization This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0220 - Bleu: 98.2132 - Gen Len: 18.5417 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | No log | 1.0 | 440 | 0.0490 | 96.2659 | 19.0104 | | 0.2462 | 2.0 | 880 | 0.0220 | 98.2132 | 18.5417 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
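Since the card gives no usage example, here is a hedged inference sketch that assumes standard seq2seq generation with the fine-tuned checkpoint; the input sentence is a placeholder.

```python
# Assumed usage: standard seq2seq inference with the fine-tuned mBART checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("davidpedem/mbart-neutralization")
model = AutoModelForSeq2SeqLM.from_pretrained("davidpedem/mbart-neutralization")

inputs = tokenizer("Sentence to neutralize goes here.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```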
ryusangwon/6240_Llama-2-7b-hf
ryusangwon
2024-02-22T20:30:23Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-02-22T20:30:19Z
--- base_model: meta-llama/Llama-2-7b-hf tags: - generated_from_trainer model-index: - name: 6240_Llama-2-7b-hf results: [] library_name: peft --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 6240_Llama-2-7b-hf This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 10 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.4.0 - Transformers 4.36.2 - Pytorch 2.0.1+cu117 - Datasets 2.15.0 - Tokenizers 0.15.0
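The card documents an 8-bit bitsandbytes setup but no loading code, so here is a hedged sketch of attaching this adapter to the quantized base model; note that access to meta-llama/Llama-2-7b-hf is gated and must be granted separately.

```python
# Hedged sketch: load the gated Llama-2 base in 8-bit (mirroring the
# bitsandbytes settings listed above) and attach this PEFT adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "ryusangwon/6240_Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```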
PJM124/xlmrbase-bitfit-5e-4-test
PJM124
2024-02-22T20:30:04Z
5
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-22T20:29:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
wojtekgra/Pol
wojtekgra
2024-02-22T20:29:45Z
0
1
adapter-transformers
[ "adapter-transformers", "Diaper", "Wet", "Piss", "Abdl", "Soggy", "text-to-image", "dataset:fka/awesome-chatgpt-prompts", "license:apache-2.0", "region:us" ]
text-to-image
2024-02-22T20:28:17Z
--- license: apache-2.0 datasets: - fka/awesome-chatgpt-prompts metrics: - bertscore library_name: adapter-transformers pipeline_tag: text-to-image tags: - Diaper - Wet - Piss - Abdl - Soggy ---
ThuyNT03/CS505_COQE_viT5_Prompting10_ASPOL_vcheck
ThuyNT03
2024-02-22T20:29:28Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "base_model:finetune:VietAI/vit5-large", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-22T19:28:27Z
--- license: mit base_model: VietAI/vit5-large tags: - generated_from_trainer model-index: - name: CS505_COQE_viT5_Prompting10_ASPOL_vcheck results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_Prompting10_ASPOL_vcheck This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.0 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.15.1
LoneStriker/opus-v1-34b-6.0bpw-h6-exl2
LoneStriker
2024-02-22T20:25:35Z
3
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "unsloth", "axolotl", "conversational", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-22T20:14:49Z
--- language: - en pipeline_tag: text-generation tags: - unsloth - axolotl --- # DreamGen Opus V1 <div style="display: flex; flex-direction: row; align-items: center;"> <img src="/dreamgen/opus-v1-34b/resolve/main/images/logo-1024.png" alt="model logo" style=" border-radius: 12px; margin-right: 12px; margin-top: 0px; margin-bottom: 0px; max-width: 100px; height: auto; "/> Models for **(steerable) story-writing and role-playing**. <br/>[All Opus V1 models, including quants](https://huggingface.co/collections/dreamgen/opus-v1-65d092a6f8ab7fc669111b31). </div> ## Prompting [Read the full Opus V1 prompting guide](https://dreamgen.com/docs/models/opus/v1) with many (interactive) examples and prompts that you can readily copy. <details> <summary>The models use an extended version of ChatML.</summary> ``` <|im_start|>system (Story description in the right format here) (Typically consists of plot description, style description and characters)<|im_end|> <|im_start|>user (Your instruction on how the story should continue)<|im_end|> <|im_start|>text names= Alice (Continuation of the story from the Alice character)<|im_end|> <|im_start|>text (Continuation of the story from no character in particular (pure narration))<|im_end|> <|im_start|>user (Your instruction on how the story should continue)<|im_end|> <|im_start|>text names= Bob (Continuation of the story from the Bob character)<|im_end|> ``` The Opus V1 extension is the addition of the `text` role, and the addition / modification of role names. Pay attention to the following: - The `text` messages can (but don't have to) have `names`; names are used to indicate the "active" character during role-play. - There can be multiple subsequent messages with a `text` role, especially if names are involved. - There can be multiple names attached to a message. - The format for names is `names= {{name[0]}}; {{name[1]}}`; note the spaces after `names=` and after the `;`. This spacing leads to the most natural tokenization of the names. </details> While the main goal for the models is great story-writing and role-playing performance, the models are also capable of several writing-related tasks as well as general assistance. <img src="/dreamgen/opus-v1-34b/resolve/main/images/story_writing.webp" alt="story writing" style=" padding: 12px; border-radius: 12px; border: 2px solid #f9a8d4; background: rgb(9, 9, 11); "/> Here's how you can prompt the model for the following tasks: - Steerable [Story-writing](https://dreamgen.com/docs/models/opus/v1#task-story-writing) and [Role-playing](https://dreamgen.com/docs/models/opus/v1#task-role-playing): - Input: - System prompt: You provide a story / role-play description, which consists of: - Plot description - Style description - Characters and their descriptions - Conversation turns: - Text / message turn: This represents part of the story or role-play - Instruction: This tells the model what should happen next - Output: Continuation of the story / role-play. - [Story plot summarization](https://dreamgen.com/docs/models/opus/v1#task-plot-description) - Input: A story, or a few chapters of a story. - Output: A description of the story or chapters. - [Story character description](https://dreamgen.com/docs/models/opus/v1#task-char-description) - Input: A story, or a few chapters of a story, and a set of characters. - Output: A description of the characters. - [Story style description](https://dreamgen.com/docs/models/opus/v1#task-style-description) - Input: A story, or a few chapters of a story. - Output: A description of the style of the story. - [Story description to chapters](https://dreamgen.com/docs/models/opus/v1#task-story-description-to-chapter-descriptions) - Input: A brief plot description and the desired number of chapters. - Output: A description for each chapter. - And more... ### Sampling params For story-writing and role-play, I recommend "Min P"-based sampling with `min_p` in the range `[0.01, 0.1]` and `temperature` in the range `[0.5, 1.5]`, depending on your preferences. A good starting point would be `min_p=0.1; temperature=0.8`. You may also benefit from setting presence, frequency and repetition penalties, especially at lower temperatures. ## Dataset The fine-tuning dataset consisted of ~100M tokens of steerable story-writing, role-playing, writing-assistant and general-assistant examples. Each example was up to 31000 tokens long. All story-writing and role-playing examples were based on human-written text. ![token count distribution](images/token_count_cum__token_bucket.png) ## Running the model The model should be compatible with any software that supports the base model, but beware of the prompting (see above). ### Running Locally - [Chat template from model config](tokenizer_config.json#L51) - This uses the "text" role instead of the typical "assistant" role, and it does not (can't?) support names - [LM Studio config](configs/lmstudio.json) - This uses the "text" role as well ### Running on DreamGen.com (free) You can try the model for free on [dreamgen.com](https://dreamgen.com) — note that an account is required. ## Community Join the DreamGen community on [**Discord**](https://dreamgen.com/discord) to get early access to new models. ## License - This model is intended for personal use only; other use is not permitted.
ThuyNT03/CS505_COQE_viT5_Prompting11_ASPOL_vcheck
ThuyNT03
2024-02-22T20:21:28Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "base_model:finetune:VietAI/vit5-large", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-22T19:35:27Z
--- license: mit base_model: VietAI/vit5-large tags: - generated_from_trainer model-index: - name: CS505_COQE_viT5_Prompting11_ASPOL_vcheck results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_Prompting11_ASPOL_vcheck This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.0 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.15.1
adonaivera/yolov9
adonaivera
2024-02-22T20:20:21Z
0
1
null
[ "arxiv:2402.13616", "region:us" ]
null
2024-02-22T20:13:19Z
# YOLOv9 Implementation of paper - [YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information](https://arxiv.org/abs/2402.13616) <div align="center"> <a href="./"> <img src="https://huggingface.co/adonaivera/yolov9/resolve/main/performance.png" width="79%"/> </a> </div> ## Performance MS COCO | Model | Test Size | AP<sup>val</sup> | AP<sub>50</sub><sup>val</sup> | AP<sub>75</sub><sup>val</sup> | Param. | FLOPs | | :-- | :-: | :-: | :-: | :-: | :-: | :-: | | [**YOLOv9-S**]() | 640 | **46.8%** | **63.4%** | **50.7%** | **7.2M** | **26.7G** | | [**YOLOv9-M**]() | 640 | **51.4%** | **68.1%** | **56.1%** | **20.1M** | **76.8G** | | [**YOLOv9-C**](https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-c.pt) | 640 | **53.0%** | **70.2%** | **57.8%** | **25.5M** | **102.8G** | | [**YOLOv9-E**](https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-e.pt) | 640 | **55.6%** | **72.8%** | **60.6%** | **58.1M** | **192.5G** |
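For convenience, here is a small download sketch using the checkpoint URL linked in the table above; running detection itself additionally requires the upstream WongKinYiu/yolov9 codebase (for example its `detect.py` script).

```python
# Fetch the YOLOv9-C checkpoint from the release URL given in the table above.
import urllib.request

url = "https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-c.pt"
urllib.request.urlretrieve(url, "yolov9-c.pt")
```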
danwils/BatakToba-laserRMT
danwils
2024-02-22T20:11:35Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-22T18:03:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DrishtiSharma/dolphin-2.1-mistral-7b-dpo-ultrafeedback-binarized-preferences-sigmoid
DrishtiSharma
2024-02-22T20:08:43Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:cognitivecomputations/dolphin-2.1-mistral-7b", "base_model:adapter:cognitivecomputations/dolphin-2.1-mistral-7b", "license:apache-2.0", "region:us" ]
null
2024-02-22T13:54:58Z
--- license: apache-2.0 library_name: peft tags: - trl - dpo - generated_from_trainer base_model: cognitivecomputations/dolphin-2.1-mistral-7b model-index: - name: doplhin-mistral-dpo-ultrafeedback-binarized-preferences-sigmoid results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # doplhin-mistral-dpo-ultrafeedback-binarized-preferences-sigmoid This model is a fine-tuned version of [cognitivecomputations/dolphin-2.1-mistral-7b](https://huggingface.co/cognitivecomputations/dolphin-2.1-mistral-7b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6025 - Rewards/chosen: -7.8168 - Rewards/rejected: -14.5388 - Rewards/accuracies: 0.8310 - Rewards/margins: 6.7220 - Logps/rejected: -469.4976 - Logps/chosen: -438.1190 - Logits/rejected: -2.1911 - Logits/chosen: -2.3064 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 1.0466 | 0.25 | 700 | 0.8185 | -6.6407 | -9.8742 | 0.7464 | 3.2335 | -422.8520 | -426.3579 | -2.3161 | -2.4530 | | 0.7039 | 0.51 | 1400 | 0.7051 | -6.5305 | -12.5351 | 0.8085 | 6.0046 | -449.4607 | -425.2558 | -2.1415 | -2.2554 | | 0.9519 | 0.76 | 2100 | 0.6025 | -7.8168 | -14.5388 | 0.8310 | 6.7220 | -469.4976 | -438.1190 | -2.1911 | -2.3064 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.2
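As a hedged reconstruction of this run, the sketch below wires the listed hyperparameters into TRL's `DPOTrainer`; the toy preference rows and the `beta` value are assumptions, since the card records neither the dataset schema nor beta.

```python
# Sketch of a TRL DPO setup matching the hyperparameters above. The toy rows
# and beta are illustrative; the real run used an ultrafeedback-binarized-
# preferences dataset that is not bundled here.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "cognitivecomputations/dolphin-2.1-mistral-7b"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Toy rows in the prompt/chosen/rejected format DPOTrainer expects.
train_dataset = Dataset.from_dict({
    "prompt": ["What is the capital of France?"],
    "chosen": ["The capital of France is Paris."],
    "rejected": ["France has no capital city."],
})

training_args = TrainingArguments(
    output_dir="dolphin-mistral-dpo",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    beta=0.1,             # assumed; the card does not report beta
    loss_type="sigmoid",  # matches the "sigmoid" run name
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```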
kv333q/layout1_LoRA
kv333q
2024-02-22T20:07:23Z
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-02-21T20:39:48Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a floorplan layout with color tags widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - kv333q/layout1_LoRA <Gallery /> ## Model description These are kv333q/layout1_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a floorplan layout with color tags to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](kv333q/layout1_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
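To address the usage TODO above, here is a minimal, assumed inference sketch (not from the card itself): load the SDXL base pipeline with the fp16-fix VAE used during training, attach these LoRA weights, and generate with the trigger phrase.

```python
# Assumed inference sketch for these LoRA weights.
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("kv333q/layout1_LoRA")

# "a floorplan layout with color tags" is the trigger phrase from the card.
image = pipe("a floorplan layout with color tags", num_inference_steps=30).images[0]
image.save("layout.png")
```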
LoneStriker/opus-v1-34b-4.65bpw-h6-exl2
LoneStriker
2024-02-22T20:05:41Z
2
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "unsloth", "axolotl", "conversational", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-22T19:57:12Z
--- language: - en pipeline_tag: text-generation tags: - unsloth - axolotl --- # DreamGen Opus V1 <div style="display: flex; flex-direction: row; align-items: center;"> <img src="/dreamgen/opus-v1-34b/resolve/main/images/logo-1024.png" alt="model logo" style=" border-radius: 12px; margin-right: 12px; margin-top: 0px; margin-bottom: 0px; max-width: 100px; height: auto; "/> Models for **(steerable) story-writing and role-playing**. <br/>[All Opus V1 models, including quants](https://huggingface.co/collections/dreamgen/opus-v1-65d092a6f8ab7fc669111b31). </div> ## Prompting [Read the full Opus V1 prompting guide](https://dreamgen.com/docs/models/opus/v1) with many (interactive) examples and prompts that you can readily copy. <details> <summary>The models use an extended version of ChatML.</summary> ``` <|im_start|>system (Story description in the right format here) (Typically consists of plot description, style description and characters)<|im_end|> <|im_start|>user (Your instruction on how the story should continue)<|im_end|> <|im_start|>text names= Alice (Continuation of the story from the Alice character)<|im_end|> <|im_start|>text (Continuation of the story from no character in particular (pure narration))<|im_end|> <|im_start|>user (Your instruction on how the story should continue)<|im_end|> <|im_start|>text names= Bob (Continuation of the story from the Bob character)<|im_end|> ``` The Opus V1 extension is the addition of the `text` role, and the addition / modification of role names. Pay attention to the following: - The `text` messages can (but don't have to) have `names`; names are used to indicate the "active" character during role-play. - There can be multiple subsequent messages with a `text` role, especially if names are involved. - There can be multiple names attached to a message. - The format for names is `names= {{name[0]}}; {{name[1]}}`; note the spaces after `names=` and after the `;`. This spacing leads to the most natural tokenization of the names. </details> While the main goal for the models is great story-writing and role-playing performance, the models are also capable of several writing-related tasks as well as general assistance. <img src="/dreamgen/opus-v1-34b/resolve/main/images/story_writing.webp" alt="story writing" style=" padding: 12px; border-radius: 12px; border: 2px solid #f9a8d4; background: rgb(9, 9, 11); "/> Here's how you can prompt the model for the following tasks: - Steerable [Story-writing](https://dreamgen.com/docs/models/opus/v1#task-story-writing) and [Role-playing](https://dreamgen.com/docs/models/opus/v1#task-role-playing): - Input: - System prompt: You provide a story / role-play description, which consists of: - Plot description - Style description - Characters and their descriptions - Conversation turns: - Text / message turn: This represents part of the story or role-play - Instruction: This tells the model what should happen next - Output: Continuation of the story / role-play. - [Story plot summarization](https://dreamgen.com/docs/models/opus/v1#task-plot-description) - Input: A story, or a few chapters of a story. - Output: A description of the story or chapters. - [Story character description](https://dreamgen.com/docs/models/opus/v1#task-char-description) - Input: A story, or a few chapters of a story, and a set of characters. - Output: A description of the characters. - [Story style description](https://dreamgen.com/docs/models/opus/v1#task-style-description) - Input: A story, or a few chapters of a story. - Output: A description of the style of the story. - [Story description to chapters](https://dreamgen.com/docs/models/opus/v1#task-story-description-to-chapter-descriptions) - Input: A brief plot description and the desired number of chapters. - Output: A description for each chapter. - And more... ### Sampling params For story-writing and role-play, I recommend "Min P"-based sampling with `min_p` in the range `[0.01, 0.1]` and `temperature` in the range `[0.5, 1.5]`, depending on your preferences. A good starting point would be `min_p=0.1; temperature=0.8`. You may also benefit from setting presence, frequency and repetition penalties, especially at lower temperatures. ## Dataset The fine-tuning dataset consisted of ~100M tokens of steerable story-writing, role-playing, writing-assistant and general-assistant examples. Each example was up to 31000 tokens long. All story-writing and role-playing examples were based on human-written text. ![token count distribution](images/token_count_cum__token_bucket.png) ## Running the model The model should be compatible with any software that supports the base model, but beware of the prompting (see above). ### Running Locally - [Chat template from model config](tokenizer_config.json#L51) - This uses the "text" role instead of the typical "assistant" role, and it does not (can't?) support names - [LM Studio config](configs/lmstudio.json) - This uses the "text" role as well ### Running on DreamGen.com (free) You can try the model for free on [dreamgen.com](https://dreamgen.com) — note that an account is required. ## Community Join the DreamGen community on [**Discord**](https://dreamgen.com/discord) to get early access to new models. ## License - This model is intended for personal use only; other use is not permitted.
AymanKUMA/speecht5_tts_voxpopuli_nl
AymanKUMA
2024-02-22T19:59:52Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "ar", "arxiv:1910.09700", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2024-02-22T12:32:23Z
--- license: mit language: - ar metrics: - accuracy --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BertGollnick/distilbert-base-uncased-yelp-new
BertGollnick
2024-02-22T19:59:11Z
5
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-22T19:38:13Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 1.9778 - eval_runtime: 3.8155 - eval_samples_per_second: 52.417 - eval_steps_per_second: 6.552 - epoch: 11.0 - step: 1100 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20 ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
jncraton/oo-phi-1_5-ct2-int8
jncraton
2024-02-22T19:56:52Z
4
0
transformers
[ "transformers", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "arxiv:2309.05463", "arxiv:2306.02707", "arxiv:2301.13688", "endpoints_compatible", "region:us" ]
text-generation
2024-02-22T19:56:00Z
--- datasets: - Open-Orca/OpenOrca language: - en library_name: transformers pipeline_tag: text-generation --- # Overview Unreleased, untested, unfinished beta. We've trained Microsoft Research's [phi-1.5](https://huggingface.co/microsoft/phi-1_5), a 1.3B-parameter model, with the same OpenOrca dataset as we used with our [OpenOrcaxOpenChat-Preview2-13B](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B) model. This model doesn't dramatically improve on the base model's general task performance, but the instruction tuning has made the model reliably handle the ChatML prompt format. # Evaluations We've only done limited testing so far. The [epoch 3.5 checkpoint](https://huggingface.co/Open-Orca/oo-phi-1_5/commit/f7754d8b8b4c3e0748eaf47be4cf5aac1f80a401) scores above 5.1 on MT-Bench (better than Alpaca-13B, worse than Llama2-7b-chat), while preliminary benchmarks suggest peak average performance was achieved roughly at epoch 4. ## HuggingFaceH4 Open LLM Leaderboard Performance The only significant improvement was with TruthfulQA. ![HF Leaderboard](https://huggingface.co/Open-Orca/oo-phi-1_5/resolve/main/Images/oo-phi-1_5-HFLeaderboard.png) ## MT-bench Performance ![MT-bench Score](https://huggingface.co/Open-Orca/oo-phi-1_5/resolve/main/Images/oo-phi-1_5-mtbench.png) | Epoch | Average | Turn 1 | Turn 2 | |:----------|:----------|:----------|:----------| | 3 | 4.85 | 5.69 | 4.01 | | 3.5 | 5.19 | 5.91 | 4.46 | | 4 | 4.89 | 5.74 | 4.05 | | 4.5 | 5.03 | 6.04 | 4.03 | | 5 | 4.94 | 5.76 | 4.11 | # Training Trained with full-parameter fine-tuning on 8x RTX A6000-48GB (Ampere) for 5 epochs over 62 hours (12.5h/epoch) at a commodity cost of $390 ($80/epoch). We did not use [MultiPack](https://github.com/imoneoi/multipack_sampler) packing, as training began before support for it was implemented in Axolotl for this new model type. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) We've uploaded checkpoints at every half epoch of progress to this repo. There are branches/tags for the epoch 3 and epoch 4 uploads. This should allow you, e.g. in oobabooga, to specify `Open-Orca/oo-phi-1_5:ep4` to download the epoch 4 checkpoint specifically. # Prompt Template We used [OpenAI's Chat Markup Language (ChatML)](https://github.com/openai/openai-python/blob/main/chatml.md) format, with `<|im_start|>` and `<|im_end|>` tokens added to support this. This means that, e.g., in [oobabooga](https://github.com/oobabooga/text-generation-webui/) the `MPT-Chat` instruction template should work. # Inference Remove `.to('cuda')` to run without GPU acceleration. ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig model = AutoModelForCausalLM.from_pretrained("Open-Orca/oo-phi-1_5", trust_remote_code=True, torch_dtype=torch.bfloat16 ).to('cuda') tokenizer = AutoTokenizer.from_pretrained("Open-Orca/oo-phi-1_5", trust_remote_code=True, torch_dtype=torch.bfloat16) sys_prompt = "I am OrcaPhi. The following is my internal dialogue as an AI assistant.\n" \ "Today is September 15, 2023. 
I have no access to outside tools, news, or current events.\n" \ "I carefully provide accurate, factual, thoughtful, nuanced answers and am brilliant at reasoning.\n" \ "I think through my answers step-by-step to be sure I always get the right answer.\n" \ "I think more clearly if I write out my thought process in a scratchpad manner first; therefore, I always " \ "explain background context, assumptions, and step-by-step thinking BEFORE trying to answer a question." \ "Take a deep breath and think calmly about everything presented." prompt = "Hello! Tell me about what makes you special, as an AI assistant.\n" \ "Particularly, what programming tasks are you best at?" prefix = "<|im_start|>" suffix = "<|im_end|>\n" sys_format = prefix + "system\n" + sys_prompt + suffix user_format = prefix + "user\n" + prompt + suffix assistant_format = prefix + "assistant\n" input_text = sys_format + user_format + assistant_format generation_config = GenerationConfig( max_length=1024, temperature=0.01, top_p=0.95, repetition_penalty=1.1, do_sample=True, use_cache=True, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id, transformers_version="4.33.1" ) inputs = tokenizer(input_text, return_tensors="pt", return_attention_mask=False).to('cuda') outputs = model.generate(**inputs, generation_config=generation_config) text = tokenizer.batch_decode(outputs)[0] print(text) ``` Example output: ```text <|im_start|>system I am OrcaPhi. The following is my internal dialogue as an AI assistant. Today is September 15, 2023. I have no access to outside tools, news, or current events. I carefully provide accurate, factual, thoughtful, nuanced answers and am brilliant at reasoning. I think through my answers step-by-step to be sure I always get the right answer. I think more clearly if I write out my thought process in a scratchpad manner first; therefore, I always explain background context, assumptions, and step-by-step thinking BEFORE trying to answer a question.Take a deep breath and think calmly about everything presented.<|im_end|> <|im_start|>user Hello! Tell me about what makes you special, as an AI assistant. Particularly, what programming tasks are you best at?<|im_end|> <|im_start|>assistant As an AI, I'm designed to understand and respond to human language, which includes natural language processing (NLP) techniques that allow me to comprehend text, speech, and other forms of communication. My primary function is to assist users by providing relevant information, answering questions, and performing various tasks based on user input. In terms of programming tasks, I am particularly skilled in: 1. Natural Language Processing (NLP): This allows me to understand and analyze human language, enabling me to generate appropriate responses and perform tasks such as sentiment analysis, named entity recognition, and topic modeling. 2. Machine Learning: I can learn from large datasets and improve my performance over time, allowing me to adapt to new situations and provide better assistance to users. 3. Deep Learning: I can leverage advanced neural networks and complex architectures to solve problems that may not be easily addressed using traditional machine learning methods. 4. Computer Vision: I can recognize and interpret visual data, making it possible for me to identify objects, faces, and other visual elements within images and videos. 5. Robotics: I can help with tasks related to robotics, including object detection, navigation, and manipulation. 6. 
Voice Recognition: I can accurately transcribe spoken words into written text, making it easier for users to interact with me. 7. Chatbots: I can engage in conversations with users, providing them with helpful information, answering their questions, and assisting them with various tasks. 8. Data Analysis: I can analyze large amounts of data quickly and efficiently, helping users make informed decisions based on insights derived from the information provided. 9. Recommender Systems: I can suggest products, services, or content based on users' preferences and past behavior, improving their overall experience. 10. Fraud Detection: I can detect and prevent fraudulent activities, protecting users' financial information and ensuring secure transactions. These programming tasks showcase my ability to understand and process vast amounts of information while adapting to different contexts and user needs. As an AI, I continuously learn and evolve to become even more effective in assisting users.<|im_end|> ``` # Citation ```bibtex @software{lian2023oophi15, title = {OpenOrca oo-phi-1.5: Phi-1.5 1.3B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset}, author = {Wing Lian and Bleys Goodson and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"}, year = {2023}, publisher = {HuggingFace}, journal = {HuggingFace repository}, howpublished = {\url{https://huggingface.co/Open-Orca/oo-phi-1_5}}, } @article{textbooks2, title={Textbooks Are All You Need II: \textbf{phi-1.5} technical report}, author={Li, Yuanzhi and Bubeck, S{\'e}bastien and Eldan, Ronen and Del Giorno, Allie and Gunasekar, Suriya and Lee, Yin Tat}, journal={arXiv preprint arXiv:2309.05463}, year={2023} } @misc{mukherjee2023orca, title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah}, year={2023}, eprint={2306.02707}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{longpre2023flan, title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning}, author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts}, year={2023}, eprint={2301.13688}, archivePrefix={arXiv}, primaryClass={cs.AI} } ```
crossroderick/q-Taxi-v3
crossroderick
2024-02-22T19:52:18Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-02-22T19:52:15Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id = "crossroderick/q-Taxi-v3", filename = "q-learning.pkl") ```
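The `load_from_hub` helper in the snippet above comes from the Deep RL course notebooks rather than a published library, so here is a self-contained, hedged sketch that fetches the pickle directly and rolls out the greedy policy; it assumes the course's checkpoint layout, where the saved dict exposes a "qtable" key.

```python
# Hedged rollout sketch; assumes the Deep RL course checkpoint layout.
import pickle
import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="crossroderick/q-Taxi-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    checkpoint = pickle.load(f)
qtable = checkpoint["qtable"]  # assumption: course-style dict with a "qtable" key

env = gym.make("Taxi-v3")
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(qtable[state]))  # act greedily from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```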
CorticalStack/gemma-7b-ultrachat-sft
CorticalStack
2024-02-22T19:50:06Z
50
2
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-22T19:47:18Z
--- license: apache-2.0 --- # gemma-7b-ultrachat-sft gemma-7b-ultrachat-sft is an SFT fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) using the [stingning/ultrachat](https://huggingface.co/datasets/stingning/ultrachat) dataset. ## Fine-tuning configuration ### LoRA - LoRA r: 8 - LoRA alpha: 16 - LoRA dropout: 0.1 ### Training arguments - Epochs: 1 - Batch size: 4 - Gradient accumulation steps: 6 - Optimizer: paged_adamw_32bit - Max steps: 100 - Learning rate: 0.0002 - Weight decay: 0.001 - Learning rate scheduler type: constant - Max seq length: 2048
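A hedged reconstruction of this configuration with PEFT and TRL follows; the LoRA target modules, the packing choice, and the ultrachat formatting are assumptions, since the card lists only the hyperparameters above.

```python
# Sketch of the SFT setup above using PEFT + TRL. Only the listed
# hyperparameters are taken from the card; the rest is assumed.
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

peft_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM")

training_args = TrainingArguments(
    output_dir="gemma-7b-ultrachat-sft",
    num_train_epochs=1,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=6,
    optim="paged_adamw_32bit",
    max_steps=100,
    learning_rate=2e-4,
    weight_decay=0.001,
    lr_scheduler_type="constant",
)

dataset = load_dataset("stingning/ultrachat", split="train")

def formatting_func(example):
    # Assumption: join the multi-turn "data" field into one training string.
    return " ".join(example["data"])

trainer = SFTTrainer(
    model="google/gemma-7b",  # gated; access must be granted separately
    args=training_args,
    peft_config=peft_config,
    train_dataset=dataset,
    formatting_func=formatting_func,
    max_seq_length=2048,
    packing=True,  # assumed; not stated in the card
)
trainer.train()
```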
LoneStriker/opus-v1-34b-3.0bpw-h6-exl2
LoneStriker
2024-02-22T19:49:42Z
2
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "unsloth", "axolotl", "conversational", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-22T19:43:59Z
---
language:
- en
pipeline_tag: text-generation
tags:
- unsloth
- axolotl
---

# DreamGen Opus V1

<div style="display: flex; flex-direction: row; align-items: center;">
<img src="/dreamgen/opus-v1-34b/resolve/main/images/logo-1024.png" alt="model logo" style="border-radius: 12px; margin-right: 12px; margin-top: 0px; margin-bottom: 0px; max-width: 100px; height: auto;"/>

Models for **(steerable) story-writing and role-playing**.
<br/>[All Opus V1 models, including quants](https://huggingface.co/collections/dreamgen/opus-v1-65d092a6f8ab7fc669111b31).

</div>

## Prompting

[Read the full Opus V1 prompting guide](https://dreamgen.com/docs/models/opus/v1) with many (interactive) examples and prompts that you can readily copy.

<details>
<summary>The models use an extended version of ChatML.</summary>

```
<|im_start|>system
(Story description in the right format here)
(Typically consists of plot description, style description and characters)<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Alice
(Continuation of the story from the Alice character)<|im_end|>
<|im_start|>text
(Continuation of the story from no character in particular (pure narration))<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Bob
(Continuation of the story from the Bob character)<|im_end|>
```

The Opus V1 extension is the addition of the `text` role and the addition/modification of role names.

Pay attention to the following:

- The `text` messages can (but do not have to) have `names`; names are used to indicate the "active" character during role-play.
- There can be multiple subsequent messages with a `text` role, especially if names are involved.
- There can be multiple names attached to a message.
- The format for names is `names= {{name[0]}}; {{name[1]}}`; beware of the spaces after `names=` and after the `;`. This spacing leads to the most natural tokenization for the names.

</details>

While the main goal for the models is great story-writing and role-playing performance, the models are also capable of several writing-related tasks as well as general assistance.

<img src="/dreamgen/opus-v1-34b/resolve/main/images/story_writing.webp" alt="story writing" style="padding: 12px; border-radius: 12px; border: 2px solid #f9a8d4; background: rgb(9, 9, 11);"/>

Here's how you can prompt the model for the following tasks:

- Steerable [Story-writing](https://dreamgen.com/docs/models/opus/v1#task-story-writing) and [Role-playing](https://dreamgen.com/docs/models/opus/v1#task-role-playing):
  - Input:
    - System prompt: You provide a story / role-play description, which consists of:
      - Plot description
      - Style description
      - Characters and their descriptions
    - Conversation turns:
      - Text / message turn: This represents part of the story or role-play
      - Instruction: This tells the model what should happen next
  - Output: Continuation of the story / role-play.
- [Story plot summarization](https://dreamgen.com/docs/models/opus/v1#task-plot-description)
  - Input: A story, or a few chapters of a story.
  - Output: A description of the story or chapters.
- [Story character description](https://dreamgen.com/docs/models/opus/v1#task-char-description)
  - Input: A story, or a few chapters of a story, and a set of characters.
  - Output: A description of the characters.
- [Story style description](https://dreamgen.com/docs/models/opus/v1#task-style-description)
  - Input: A story, or a few chapters of a story.
  - Output: A description of the style of the story.
- [Story description to chapters](https://dreamgen.com/docs/models/opus/v1#task-story-description-to-chapter-descriptions)
  - Input: A brief plot description and the desired number of chapters.
  - Output: A description for each chapter.
- And more...

### Sampling params

For story-writing and role-play, I recommend "Min P" based sampling with `min_p` in the range `[0.01, 0.1]` and with `temperature` in the range `[0.5, 1.5]`, depending on your preferences. A good starting point would be `min_p=0.1; temperature=0.8`.

You may also benefit from setting presence, frequency and repetition penalties, especially at lower temperatures.

## Dataset

The fine-tuning dataset consisted of ~100M tokens of steerable story-writing, role-playing, writing-assistant and general-assistant examples. Each example was up to 31000 tokens long.

All story-writing and role-playing examples were based on human-written text.

![token count distribution](images/token_count_cum__token_bucket.png)

## Running the model

The model should be compatible with any software that supports the base model, but beware of the prompting (see above).

### Running Locally

- [Chat template from model config](tokenizer_config.json#L51)
  - This uses the "text" role instead of the typical "assistant" role, and it does not (perhaps cannot) support names
- [LM Studio config](configs/lmstudio.json)
  - This uses the "text" role as well

### Running on DreamGen.com (free)

You can try the model for free on [dreamgen.com](https://dreamgen.com) — note that an account is required.

## Community

Join the DreamGen community on [**Discord**](https://dreamgen.com/discord) to get early access to new models.

## License

- This model is intended for personal use only; other use is not permitted.
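As an addendum, a minimal sketch of driving the prompt format and sampling settings above with plain `transformers`. It assumes the `dreamgen/opus-v1-34b` repo id (inferred from this card's asset paths), a placeholder story description, and a `transformers` release recent enough to accept `min_p` in `generate()`:

```python
# Hand-build the extended ChatML prompt described above and sample with the
# recommended starting point (min_p=0.1, temperature=0.8).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dreamgen/opus-v1-34b"  # assumed from this card's asset paths
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = (
    "<|im_start|>system\n"
    "Plot: ... Style: ... Characters: Alice; Bob<|im_end|>\n"  # placeholder story description
    "<|im_start|>user\n"
    "Alice greets Bob at the harbor.<|im_end|>\n"
    "<|im_start|>text names= Alice\n"  # note the space after "names=" as described above
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, min_p=0.1, temperature=0.8)
# Decode only the newly generated continuation.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:]))
```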
Struggler41/AlixVoice
Struggler41
2024-02-22T19:47:20Z
0
2
null
[ "Gay", "Alix", "Aicover", "en", "region:us" ]
null
2024-02-02T23:54:55Z
--- language: - en tags: - Gay - Alix - Aicover ---
dmusingu/luganda_wav2vec2_ctc_tokenizer_with_lm
dmusingu
2024-02-22T19:46:32Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-22T13:03:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
crossroderick/q-FrozenLake-v1-4x4-noSlippery
crossroderick
2024-02-22T19:45:48Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-02-22T17:46:13Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="crossroderick/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
```

This particular model was trained on the default version of FrozenLake-v1 in a 4x4 setting, so don't forget to set `is_slippery = False` and specify `map_name` when loading the environment, such as:

```python
env = gym.make(model["env_id"], map_name="4x4", is_slippery=False)
```
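Continuing from the `model` dict loaded above, a minimal sketch of one greedy evaluation episode; the `qtable` key and the `gymnasium` import are assumptions based on the Deep RL course convention this card follows:

```python
import gymnasium as gym
import numpy as np

env = gym.make(model["env_id"], map_name="4x4", is_slippery=False)
state, _ = env.reset(seed=42)
done, episode_return = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action w.r.t. the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    episode_return += reward
    done = terminated or truncated
print("episode return:", episode_return)
```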
jojo-ai-mst/MyanmarGPTX
jojo-ai-mst
2024-02-22T19:41:29Z
11
1
transformers
[ "transformers", "onnx", "gpt2", "text-generation", "myanmar", "myanmargpt", "my", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-02-22T19:27:25Z
---
library_name: transformers
tags:
- myanmar
- myanmargpt
widget:
- text: |-
    User: မြန်မာနိုင်ငံအကြောင်းရှင်းပြပါ။
    Assistant:
  example_title: Example 1
- text: |-
    User: ရုရှားနိုင်ငံအကြောင်းပြောပြပါ
    Assistant:
  example_title: Example 2
- text: |-
    User: ကွန်မြူနစ်ဆိုတာဘာလဲ
    Assistant:
  example_title: Example 3
license: mit
language:
- my
---

# MyanmarGPTX (Myanmar GPT X)

GPT for the Burmese language, the X version of Myanmar GPT.

A Generative Pretrained Transformer for the Burmese Language - Myanmar GPT X:

- Faster
- Lightweight
- Accurate
- Works on browser runtimes

## Model Details

### Model Description

- **Developed by:** Min Si Thu
- **Model type:** GPT-2
- **Language(s) (NLP):** English, Burmese (Myanmar)
- **License:** MIT
- **Finetuned from model:** [https://huggingface.co/jojo-ai-mst/MyanmarGPT-Chat](https://huggingface.co/jojo-ai-mst/MyanmarGPT-Chat)

### Model Sources

- **Repository:** https://github.com/MinSiThu/MyanmarGPT

## Uses

The "Myanmar GPT X" model is released for the improvement of Burmese-language NLP. The main purpose is to build web, mobile, and desktop applications powered by Burmese-language-enabled GPT under the MIT License.
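A minimal usage sketch, assuming the checkpoint loads with the standard `transformers` text-generation pipeline; the prompt mirrors the widget examples above:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="jojo-ai-mst/MyanmarGPTX")

# Same User/Assistant prompt shape as the widget examples.
prompt = "User: မြန်မာနိုင်ငံအကြောင်းရှင်းပြပါ။\nAssistant:"
result = generator(prompt, max_new_tokens=128, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```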
FINNUMBER/Yi-Ko-6B-Finch-ALL-900-PER100-NEW-epoch3
FINNUMBER
2024-02-22T19:41:02Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-22T18:30:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
FINNUMBER/Yi-Ko-6B-Finch-ALL-3600-PER400-NEW-epoch3
FINNUMBER
2024-02-22T19:40:57Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-22T18:29:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mixtralyanis/bart_samsum
mixtralyanis
2024-02-22T19:38:47Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-large-cnn", "base_model:finetune:facebook/bart-large-cnn", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-22T14:55:25Z
---
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
model-index:
- name: bart_samsum
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bart_samsum

This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
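Pending the missing details above, a hedged inference sketch, assuming dialogue summarization (suggested by the "samsum" model name); the example conversation is a placeholder:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="mixtralyanis/bart_samsum")

# Placeholder dialogue in the SAMSum chat style.
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes! 12:30 at the usual place?\n"
    "Anna: Perfect, see you then."
)
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```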
ThuyNT03/CS505_COQE_viT5_Prompting11_ASPOL_v2
ThuyNT03
2024-02-22T19:35:06Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "base_model:finetune:VietAI/vit5-large", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-22T18:37:05Z
---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: CS505_COQE_viT5_Prompting11_ASPOL_v2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# CS505_COQE_viT5_Prompting11_ASPOL_v2

This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
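For reference, a hedged reconstruction of the `TrainingArguments` implied by the hyperparameters listed above; `output_dir` is a placeholder, and the Adam betas/epsilon in the list are the `transformers` defaults, so they are not set explicitly:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="CS505_COQE_viT5_Prompting11_ASPOL_v2",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=25,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```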
doof-ferb/whisper-tiny-vi
doof-ferb
2024-02-22T19:34:20Z
16
1
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "vi", "dataset:doof-ferb/vlsp2020_vinai_100h", "dataset:doof-ferb/fpt_fosd", "dataset:doof-ferb/infore1_25hours", "dataset:doof-ferb/infore2_audiobooks", "dataset:quocanh34/viet_vlsp", "dataset:linhtran92/final_dataset_500hrs_wer0", "dataset:linhtran92/viet_youtube_asr_corpus_v2", "dataset:google/fleurs", "dataset:mozilla-foundation/common_voice_16_1", "dataset:vivos", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-20T10:16:52Z
---
license: apache-2.0
datasets:
- doof-ferb/vlsp2020_vinai_100h
- doof-ferb/fpt_fosd
- doof-ferb/infore1_25hours
- doof-ferb/infore2_audiobooks
- quocanh34/viet_vlsp
- linhtran92/final_dataset_500hrs_wer0
- linhtran92/viet_youtube_asr_corpus_v2
- google/fleurs
- mozilla-foundation/common_voice_16_1
- vivos
language: ["vi"]
metrics: ["wer"]
library_name: transformers
base_model: openai/whisper-tiny
pipeline_tag: automatic-speech-recognition
model-index:
- name: doof-ferb/whisper-tiny-vi
  results:
  - task:
      type: automatic-speech-recognition
    dataset:
      type: mozilla-foundation/common_voice_16_1
      name: Mozilla CommonVoice (Vietnamese) v16.1
      config: vi
      split: test
    metrics:
    - type: wer
      value: 26.6
      verified: false
  - task:
      type: automatic-speech-recognition
    dataset:
      type: google/fleurs
      name: Google FLEURS (Vietnamese)
      config: vi_vn
      split: test
    metrics:
    - type: wer
      value: 37.1
      verified: false
  - task:
      type: automatic-speech-recognition
    dataset:
      type: vivos
      name: ĐHQG TPHCM VIVOS
      split: test
    metrics:
    - type: wer
      value: 18.7
      verified: false
---

Whisper tiny fine-tuned on a very large collection of Vietnamese speech datasets.

TODO:
- [x] training then publish checkpoint
- [x] evaluate WER on Common Voice & FLEURS & VIVOS
- [ ] convert to `openai-whisper`, `whisper.cpp`, `faster-whisper`
- [ ] convert to ONNX: to try https://github.com/k2-fsa/sherpa-onnx & https://github.com/zhuzilin/whisper-openvino
- [ ] convert to TensorRT: https://github.com/openai/whisper/discussions/169

21k steps, warm-up 5%, batch size 16×2 (Kaggle free T4×2).

Manually evaluated WER on the test sets (Vietnamese part):

| @ `float16` | `CommonVoice v16.1` | `FLEURS` | `VIVOS` |
|---|---|---|---|
| original `whisper-tiny` | >100% | 88.6% | 62.5% |
| this model | 26.6% | 37.1% | 18.7% |

All training + evaluation scripts are in my repo: https://github.com/phineas-pta/fine-tune-whisper-vi

Usage example:

```python
import torch
from transformers import pipeline

PIPE = pipeline(task="automatic-speech-recognition", model="doof-ferb/whisper-tiny-vi", device="cuda:0", torch_dtype=torch.float16)
PIPE_KWARGS = {"language": "vi", "task": "transcribe"}

PIPE("audio.mp3", generate_kwargs=PIPE_KWARGS)["text"]
```
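To score transcripts the same way as the table above, a minimal sketch with the `evaluate` library, reusing `PIPE` and `PIPE_KWARGS` from the snippet above; the reference transcript is a placeholder, not actual test data:

```python
import evaluate

wer = evaluate.load("wer")
references = ["xin chào các bạn"]  # placeholder ground truth
predictions = [PIPE("audio.mp3", generate_kwargs=PIPE_KWARGS)["text"]]
print(f"WER: {100 * wer.compute(predictions=predictions, references=references):.1f}%")
```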
Ayus077BCT014Bhandari/vartat5-using-100K-plus-4
Ayus077BCT014Bhandari
2024-02-22T19:29:27Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-22T13:07:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
robdemunck/finetuned-t5-cnn_dailymail
robdemunck
2024-02-22T19:29:17Z
4
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-20T17:11:29Z
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: finetuned-t5-cnn_dailymail
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned-t5-cnn_dailymail

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

### Framework versions

- Transformers 4.38.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
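Pending the missing usage details above, a hedged inference sketch assuming the standard T5 `summarize:` task prefix (the fine-tuning prompt format is not documented here); the article text is a placeholder:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "robdemunck/finetuned-t5-cnn_dailymail"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# T5's conventional summarization prefix; an assumption for this checkpoint.
text = "summarize: The city council approved a new bike-lane network on Tuesday ..."
input_ids = tokenizer(text, return_tensors="pt", truncation=True).input_ids
summary_ids = model.generate(input_ids, max_new_tokens=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```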
alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-clean-05
alinerodrigues
2024-02-22T19:24:29Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-22T15:13:14Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-clean-05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-clean-05 This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1042 - Wer: 0.0718 - Cer: 0.0214 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 28.8714 | 1.0 | 67 | 3.3571 | 1.0 | 1.0 | | 7.5799 | 2.0 | 134 | 2.9876 | 1.0 | 1.0 | | 3.0284 | 3.0 | 201 | 2.9114 | 1.0 | 1.0 | | 3.0284 | 4.0 | 268 | 2.8889 | 1.0 | 1.0 | | 2.9172 | 5.0 | 335 | 2.8515 | 1.0 | 1.0 | | 2.8101 | 6.0 | 402 | 2.1557 | 1.0 | 0.6878 | | 2.8101 | 7.0 | 469 | 0.7046 | 0.3468 | 0.0850 | | 1.5251 | 8.0 | 536 | 0.4276 | 0.1963 | 0.0517 | | 0.7791 | 9.0 | 603 | 0.3256 | 0.1723 | 0.0455 | | 0.7791 | 10.0 | 670 | 0.2743 | 0.1416 | 0.0388 | | 0.5599 | 11.0 | 737 | 0.2362 | 0.1387 | 0.0378 | | 0.4678 | 12.0 | 804 | 0.2119 | 0.1265 | 0.0352 | | 0.4678 | 13.0 | 871 | 0.1984 | 0.1179 | 0.0339 | | 0.4302 | 14.0 | 938 | 0.1834 | 0.1235 | 0.0332 | | 0.3794 | 15.0 | 1005 | 0.1760 | 0.1133 | 0.0310 | | 0.3794 | 16.0 | 1072 | 0.1763 | 0.1080 | 0.0309 | | 0.3234 | 17.0 | 1139 | 0.1583 | 0.1018 | 0.0294 | | 0.3144 | 18.0 | 1206 | 0.1570 | 0.0932 | 0.0275 | | 0.3144 | 19.0 | 1273 | 0.1421 | 0.0912 | 0.0263 | | 0.2824 | 20.0 | 1340 | 0.1448 | 0.0886 | 0.0263 | | 0.2503 | 21.0 | 1407 | 0.1371 | 0.0916 | 0.0260 | | 0.2503 | 22.0 | 1474 | 0.1387 | 0.0860 | 0.0253 | | 0.2547 | 23.0 | 1541 | 0.1301 | 0.0863 | 0.0242 | | 0.2397 | 24.0 | 1608 | 0.1272 | 0.0823 | 0.0239 | | 0.2397 | 25.0 | 1675 | 0.1368 | 0.0827 | 0.0250 | | 0.2402 | 26.0 | 1742 | 0.1303 | 0.0807 | 0.0243 | | 0.2581 | 27.0 | 1809 | 0.1248 | 0.0777 | 0.0239 | | 0.2581 | 28.0 | 1876 | 0.1242 | 0.0758 | 0.0225 | | 0.2334 | 29.0 | 1943 | 0.1231 | 0.0774 | 0.0228 | | 0.2087 | 30.0 | 2010 | 0.1226 | 0.0754 | 0.0224 | | 0.2087 | 31.0 | 2077 | 0.1227 | 0.0774 | 0.0230 | | 0.2175 | 32.0 | 2144 | 0.1270 | 0.0767 | 0.0231 | | 0.1973 | 33.0 | 2211 | 0.1258 | 0.0754 | 0.0230 | | 0.1973 | 34.0 | 2278 | 0.1186 | 0.0754 | 0.0223 | | 0.1787 | 35.0 | 2345 | 0.1234 | 0.0735 | 0.0217 | | 0.1958 | 36.0 | 2412 | 0.1199 | 0.0741 | 0.0222 | | 0.1958 | 37.0 | 2479 | 0.1177 | 0.0754 | 0.0222 | | 0.1773 | 38.0 | 2546 | 0.1138 | 0.0751 | 0.0225 | | 0.2047 | 39.0 | 2613 | 0.1164 | 0.0751 | 0.0224 | | 0.2047 | 40.0 | 2680 | 0.1155 | 0.0751 | 0.0227 | | 0.1727 | 41.0 | 2747 | 0.1109 | 0.0728 | 0.0213 | | 0.1708 | 42.0 | 2814 | 0.1132 | 0.0702 | 0.0213 | | 0.1708 | 43.0 | 2881 | 0.1110 
| 0.0728 | 0.0217 | | 0.1814 | 44.0 | 2948 | 0.1094 | 0.0711 | 0.0215 | | 0.159 | 45.0 | 3015 | 0.1091 | 0.0702 | 0.0211 | | 0.159 | 46.0 | 3082 | 0.1065 | 0.0702 | 0.0208 | | 0.163 | 47.0 | 3149 | 0.1110 | 0.0708 | 0.0210 | | 0.1565 | 48.0 | 3216 | 0.1121 | 0.0725 | 0.0215 | | 0.1565 | 49.0 | 3283 | 0.1096 | 0.0715 | 0.0215 | | 0.1571 | 50.0 | 3350 | 0.1083 | 0.0718 | 0.0210 | | 0.165 | 51.0 | 3417 | 0.1056 | 0.0711 | 0.0210 | | 0.165 | 52.0 | 3484 | 0.1042 | 0.0718 | 0.0214 | | 0.1525 | 53.0 | 3551 | 0.1067 | 0.0698 | 0.0209 | | 0.1365 | 54.0 | 3618 | 0.1084 | 0.0715 | 0.0208 | | 0.1365 | 55.0 | 3685 | 0.1086 | 0.0735 | 0.0215 | | 0.1434 | 56.0 | 3752 | 0.1073 | 0.0711 | 0.0208 | | 0.1408 | 57.0 | 3819 | 0.1062 | 0.0705 | 0.0209 | | 0.1408 | 58.0 | 3886 | 0.1066 | 0.0708 | 0.0205 | | 0.1364 | 59.0 | 3953 | 0.1074 | 0.0702 | 0.0207 | | 0.1507 | 60.0 | 4020 | 0.1049 | 0.0725 | 0.0207 | | 0.1507 | 61.0 | 4087 | 0.1086 | 0.0715 | 0.0211 | | 0.1532 | 62.0 | 4154 | 0.1083 | 0.0738 | 0.0210 | | 0.1255 | 63.0 | 4221 | 0.1058 | 0.0721 | 0.0207 | | 0.1255 | 64.0 | 4288 | 0.1087 | 0.0708 | 0.0202 | | 0.1534 | 65.0 | 4355 | 0.1073 | 0.0738 | 0.0208 | | 0.1316 | 66.0 | 4422 | 0.1061 | 0.0731 | 0.0210 | | 0.1316 | 67.0 | 4489 | 0.1082 | 0.0731 | 0.0208 | | 0.1365 | 68.0 | 4556 | 0.1100 | 0.0751 | 0.0213 | | 0.1324 | 69.0 | 4623 | 0.1104 | 0.0708 | 0.0206 | | 0.1324 | 70.0 | 4690 | 0.1073 | 0.0721 | 0.0206 | | 0.1299 | 71.0 | 4757 | 0.1104 | 0.0711 | 0.0211 | | 0.125 | 72.0 | 4824 | 0.1078 | 0.0718 | 0.0212 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.1.1+cu121 - Datasets 2.17.1 - Tokenizers 0.13.3
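A minimal inference sketch, assuming this is a standard CTC checkpoint usable with the `transformers` automatic-speech-recognition pipeline; `audio.wav` stands in for any 16 kHz recording:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-clean-05",
)
print(asr("audio.wav")["text"])
```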
peldrak/segformer-b3-ade-512-512-finetuned-coastTrain
peldrak
2024-02-22T19:20:03Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "segformer", "vision", "image-segmentation", "generated_from_trainer", "base_model:nvidia/segformer-b3-finetuned-ade-512-512", "base_model:finetune:nvidia/segformer-b3-finetuned-ade-512-512", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2024-02-22T13:19:56Z
--- license: other base_model: nvidia/segformer-b3-finetuned-ade-512-512 tags: - vision - image-segmentation - generated_from_trainer model-index: - name: segformer-b3-ade-512-512-finetuned-coastTrain results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b3-ade-512-512-finetuned-coastTrain This model is a fine-tuned version of [nvidia/segformer-b3-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b3-finetuned-ade-512-512) on the peldrak/coastTrain_512-512 dataset. It achieves the following results on the evaluation set: - Loss: 0.7613 - Mean Iou: 0.7092 - Mean Accuracy: 0.8104 - Overall Accuracy: 0.8790 - Accuracy Water: 0.9352 - Accuracy Whitewater: 0.8067 - Accuracy Sediment: 0.8732 - Accuracy Other Natural Terrain: 0.5054 - Accuracy Vegetation: 0.8997 - Accuracy Development: 0.8714 - Accuracy Unknown: 0.7814 - Iou Water: 0.8677 - Iou Whitewater: 0.6795 - Iou Sediment: 0.7649 - Iou Other Natural Terrain: 0.4259 - Iou Vegetation: 0.7883 - Iou Development: 0.7211 - Iou Unknown: 0.7170 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Water | Accuracy Whitewater | Accuracy Sediment | Accuracy Other Natural Terrain | Accuracy Vegetation | Accuracy Development | Accuracy Unknown | Iou Water | Iou Whitewater | Iou Sediment | Iou Other Natural Terrain | Iou Vegetation | Iou Development | Iou Unknown | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------:|:-------------------:|:-----------------:|:------------------------------:|:-------------------:|:--------------------:|:----------------:|:---------:|:--------------:|:------------:|:-------------------------:|:--------------:|:---------------:|:-----------:| | 1.7642 | 0.05 | 20 | 1.6699 | 0.1741 | 0.2887 | 0.4511 | 0.3629 | 0.3020 | 0.0122 | 0.0013 | 0.8998 | 0.1317 | 0.3106 | 0.3310 | 0.0708 | 0.0112 | 0.0013 | 0.3953 | 0.1007 | 0.3084 | | 1.6158 | 0.11 | 40 | 1.3903 | 0.1804 | 0.2783 | 0.5516 | 0.6957 | 0.0198 | 0.1032 | 0.0000 | 0.9605 | 0.1077 | 0.0616 | 0.5309 | 0.0184 | 0.0965 | 0.0000 | 0.4589 | 0.0973 | 0.0606 | | 1.3168 | 0.16 | 60 | 1.1710 | 0.2583 | 0.3483 | 0.6425 | 0.8324 | 0.0359 | 0.0669 | 0.0 | 0.9578 | 0.1157 | 0.4296 | 0.6688 | 0.0344 | 0.0630 | 0.0 | 0.5089 | 0.1094 | 0.4233 | | 1.1024 | 0.22 | 80 | 1.0398 | 0.3143 | 0.4032 | 0.6865 | 0.8815 | 0.1083 | 0.1413 | 0.0 | 0.9619 | 0.2673 | 0.4620 | 0.6970 | 0.1041 | 0.1261 | 0.0 | 0.5725 | 0.2452 | 0.4549 | | 1.0384 | 0.27 | 100 | 0.9307 | 0.3388 | 0.4315 | 0.7113 | 0.8919 | 0.0379 | 0.3662 | 0.0 | 0.9582 | 0.2753 | 0.4913 | 0.7137 | 0.0374 | 0.2526 | 0.0 | 0.6316 | 0.2550 | 0.4813 | | 0.9056 | 0.32 | 120 | 0.8649 | 0.3988 | 0.5060 | 0.7415 | 0.9191 | 0.1051 | 0.4270 | 0.0 | 0.8743 | 0.7201 | 0.4965 | 0.7159 | 0.1038 | 0.3178 | 0.0 | 0.6739 | 0.4951 | 0.4849 | | 1.1867 | 0.38 | 140 | 0.8470 | 0.4027 | 0.5076 | 0.7418 | 0.8363 | 0.0329 | 
0.6586 | 0.0 | 0.9529 | 0.5761 | 0.4960 | 0.7494 | 0.0326 | 0.4722 | 0.0 | 0.6188 | 0.4643 | 0.4815 | | 1.2778 | 0.43 | 160 | 0.8108 | 0.4419 | 0.5491 | 0.7656 | 0.8973 | 0.1895 | 0.5864 | 0.0 | 0.9145 | 0.7679 | 0.4885 | 0.7758 | 0.1848 | 0.4732 | 0.0 | 0.6536 | 0.5269 | 0.4791 | | 0.8217 | 0.49 | 180 | 0.7507 | 0.4544 | 0.5750 | 0.7728 | 0.8801 | 0.1928 | 0.7543 | 0.0 | 0.8893 | 0.7924 | 0.5161 | 0.7912 | 0.1858 | 0.5100 | 0.0 | 0.6678 | 0.5398 | 0.4859 | | 0.9801 | 0.54 | 200 | 0.7149 | 0.4827 | 0.5995 | 0.7819 | 0.9016 | 0.3829 | 0.7254 | 0.0 | 0.8848 | 0.7904 | 0.5117 | 0.8007 | 0.3571 | 0.5615 | 0.0 | 0.6774 | 0.4859 | 0.4966 | | 0.7374 | 0.59 | 220 | 0.6885 | 0.4950 | 0.6159 | 0.7894 | 0.9068 | 0.3910 | 0.8448 | 0.0 | 0.8656 | 0.7859 | 0.5169 | 0.7839 | 0.3448 | 0.5749 | 0.0 | 0.6942 | 0.5774 | 0.4895 | | 1.0931 | 0.65 | 240 | 0.6884 | 0.4889 | 0.6134 | 0.7885 | 0.9106 | 0.3515 | 0.8590 | 0.0 | 0.8554 | 0.8118 | 0.5059 | 0.7804 | 0.3041 | 0.5561 | 0.0 | 0.7017 | 0.5858 | 0.4941 | | 0.7106 | 0.7 | 260 | 0.8052 | 0.4413 | 0.5511 | 0.7563 | 0.9400 | 0.3677 | 0.3526 | 0.0 | 0.8507 | 0.8081 | 0.5382 | 0.7137 | 0.3061 | 0.2825 | 0.0 | 0.6967 | 0.5709 | 0.5193 | | 0.7133 | 0.76 | 280 | 0.6507 | 0.5368 | 0.6542 | 0.8106 | 0.8931 | 0.6564 | 0.8631 | 0.0 | 0.9353 | 0.6824 | 0.5491 | 0.8105 | 0.5066 | 0.5893 | 0.0 | 0.7211 | 0.5953 | 0.5350 | | 0.5858 | 0.81 | 300 | 0.6587 | 0.5212 | 0.6453 | 0.7979 | 0.9158 | 0.6788 | 0.6528 | 0.0 | 0.8872 | 0.8580 | 0.5241 | 0.8226 | 0.5180 | 0.5725 | 0.0 | 0.6814 | 0.5427 | 0.5113 | | 1.9447 | 0.86 | 320 | 0.6674 | 0.5300 | 0.6268 | 0.8098 | 0.9182 | 0.4960 | 0.7161 | 0.0 | 0.9369 | 0.7516 | 0.5691 | 0.8130 | 0.4323 | 0.6061 | 0.0 | 0.6974 | 0.6098 | 0.5515 | | 0.6724 | 0.92 | 340 | 0.6814 | 0.5191 | 0.6635 | 0.7901 | 0.8573 | 0.6785 | 0.8412 | 0.0 | 0.8680 | 0.8745 | 0.5251 | 0.7797 | 0.5249 | 0.5634 | 0.0 | 0.7007 | 0.5512 | 0.5139 | | 0.6738 | 0.97 | 360 | 0.6131 | 0.5509 | 0.6663 | 0.8173 | 0.9235 | 0.6190 | 0.8401 | 0.0 | 0.8883 | 0.8535 | 0.5396 | 0.8125 | 0.5326 | 0.6451 | 0.0 | 0.7219 | 0.6215 | 0.5229 | | 0.7131 | 1.03 | 380 | 0.6163 | 0.5582 | 0.6734 | 0.8172 | 0.8994 | 0.7309 | 0.8445 | 0.0 | 0.9282 | 0.7832 | 0.5272 | 0.8228 | 0.5820 | 0.6580 | 0.0 | 0.7059 | 0.6249 | 0.5137 | | 0.8373 | 1.08 | 400 | 0.6077 | 0.5569 | 0.6737 | 0.8216 | 0.9115 | 0.7745 | 0.8120 | 0.0 | 0.9381 | 0.7219 | 0.5579 | 0.8242 | 0.5463 | 0.6524 | 0.0 | 0.7230 | 0.6089 | 0.5435 | | 0.7344 | 1.14 | 420 | 0.6830 | 0.5195 | 0.6628 | 0.7866 | 0.9541 | 0.6517 | 0.8366 | 0.0 | 0.7106 | 0.8978 | 0.5890 | 0.7943 | 0.5367 | 0.5415 | 0.0 | 0.6433 | 0.5580 | 0.5626 | | 0.4357 | 1.19 | 440 | 0.5908 | 0.5761 | 0.6867 | 0.8298 | 0.9183 | 0.7028 | 0.8313 | 0.0 | 0.9032 | 0.8332 | 0.6181 | 0.8259 | 0.5906 | 0.6593 | 0.0 | 0.7274 | 0.6289 | 0.6005 | | 0.3423 | 1.24 | 460 | 0.5857 | 0.5864 | 0.7004 | 0.8332 | 0.9054 | 0.7249 | 0.8626 | 0.0 | 0.8887 | 0.8351 | 0.6862 | 0.8155 | 0.6248 | 0.6348 | 0.0 | 0.7415 | 0.6395 | 0.6484 | | 0.5952 | 1.3 | 480 | 0.6290 | 0.5592 | 0.6560 | 0.8170 | 0.9176 | 0.7224 | 0.7925 | 0.0 | 0.9532 | 0.6852 | 0.5208 | 0.8244 | 0.6165 | 0.6594 | 0.0 | 0.7002 | 0.6048 | 0.5091 | | 0.7312 | 1.35 | 500 | 0.6103 | 0.5637 | 0.6829 | 0.8213 | 0.9303 | 0.7779 | 0.8384 | 0.0 | 0.8961 | 0.8243 | 0.5137 | 0.8265 | 0.6301 | 0.6668 | 0.0 | 0.7254 | 0.5968 | 0.5000 | | 0.4683 | 1.41 | 520 | 0.6372 | 0.5599 | 0.6809 | 0.8165 | 0.9056 | 0.7946 | 0.8165 | 0.0 | 0.9156 | 0.8230 | 0.5109 | 0.8273 | 0.6414 | 0.6765 | 0.0 | 0.7153 | 0.5598 | 0.4988 | | 0.3688 | 1.46 | 540 | 0.6608 | 
0.5537 | 0.6561 | 0.8129 | 0.9313 | 0.7464 | 0.6851 | 0.0005 | 0.9196 | 0.7613 | 0.5485 | 0.7956 | 0.6248 | 0.5633 | 0.0005 | 0.7172 | 0.6407 | 0.5340 | | 0.3681 | 1.51 | 560 | 0.5841 | 0.5800 | 0.7074 | 0.8296 | 0.9114 | 0.8080 | 0.8613 | 0.0000 | 0.8767 | 0.8714 | 0.6232 | 0.8230 | 0.6089 | 0.6616 | 0.0000 | 0.7287 | 0.6480 | 0.5899 | | 0.455 | 1.57 | 580 | 0.6379 | 0.5682 | 0.6749 | 0.8230 | 0.9313 | 0.7373 | 0.8058 | 0.0008 | 0.9130 | 0.8057 | 0.5305 | 0.8181 | 0.6047 | 0.6666 | 0.0008 | 0.7166 | 0.6519 | 0.5186 | | 0.57 | 1.62 | 600 | 0.6002 | 0.5727 | 0.6983 | 0.8273 | 0.9134 | 0.8255 | 0.8873 | 0.0001 | 0.9089 | 0.8164 | 0.5364 | 0.8227 | 0.5932 | 0.6616 | 0.0001 | 0.7306 | 0.6692 | 0.5317 | | 0.3516 | 1.68 | 620 | 0.5615 | 0.5862 | 0.7111 | 0.8336 | 0.9078 | 0.8340 | 0.8216 | 0.0012 | 0.8920 | 0.8649 | 0.6558 | 0.8358 | 0.5994 | 0.6911 | 0.0012 | 0.7225 | 0.6421 | 0.6113 | | 0.4446 | 1.73 | 640 | 0.5702 | 0.5957 | 0.6972 | 0.8386 | 0.9039 | 0.7710 | 0.8483 | 0.0002 | 0.9370 | 0.7354 | 0.6850 | 0.8382 | 0.6356 | 0.6933 | 0.0002 | 0.7234 | 0.6237 | 0.6553 | | 1.1138 | 1.78 | 660 | 0.5697 | 0.5979 | 0.7156 | 0.8434 | 0.9065 | 0.7799 | 0.8291 | 0.0001 | 0.8949 | 0.8463 | 0.7525 | 0.8289 | 0.6005 | 0.6542 | 0.0001 | 0.7518 | 0.6442 | 0.7056 | | 0.5918 | 1.84 | 680 | 0.5167 | 0.5994 | 0.7094 | 0.8458 | 0.9394 | 0.7790 | 0.8572 | 0.0047 | 0.8996 | 0.8256 | 0.6606 | 0.8389 | 0.5879 | 0.6878 | 0.0047 | 0.7481 | 0.6835 | 0.6452 | | 0.4778 | 1.89 | 700 | 0.5767 | 0.5960 | 0.7142 | 0.8340 | 0.8730 | 0.8111 | 0.8841 | 0.0080 | 0.9204 | 0.8301 | 0.6728 | 0.8068 | 0.6549 | 0.6153 | 0.0080 | 0.7425 | 0.6861 | 0.6583 | | 0.6689 | 1.95 | 720 | 0.5420 | 0.6104 | 0.7221 | 0.8482 | 0.9085 | 0.8333 | 0.8307 | 0.0255 | 0.9173 | 0.7881 | 0.7511 | 0.8325 | 0.6055 | 0.7094 | 0.0254 | 0.7499 | 0.6525 | 0.6974 | | 1.893 | 2.0 | 740 | 0.5951 | 0.5883 | 0.7054 | 0.8367 | 0.9266 | 0.7935 | 0.7975 | 0.0057 | 0.9013 | 0.9111 | 0.6018 | 0.8317 | 0.6187 | 0.6962 | 0.0057 | 0.7446 | 0.6347 | 0.5865 | | 0.3762 | 2.05 | 760 | 0.6041 | 0.5633 | 0.6763 | 0.8229 | 0.9138 | 0.6893 | 0.8762 | 0.0087 | 0.9142 | 0.7735 | 0.5580 | 0.8233 | 0.5721 | 0.6527 | 0.0087 | 0.7272 | 0.6313 | 0.5274 | | 0.5576 | 2.11 | 780 | 0.5651 | 0.5721 | 0.7097 | 0.8199 | 0.8804 | 0.8350 | 0.8177 | 0.0152 | 0.8769 | 0.9241 | 0.6184 | 0.8135 | 0.6197 | 0.6973 | 0.0152 | 0.7296 | 0.5934 | 0.5359 | | 0.4714 | 2.16 | 800 | 0.4983 | 0.6253 | 0.7352 | 0.8582 | 0.9377 | 0.8134 | 0.8719 | 0.0410 | 0.8881 | 0.8385 | 0.7560 | 0.8409 | 0.6150 | 0.7200 | 0.0409 | 0.7729 | 0.6906 | 0.6967 | | 1.2051 | 2.22 | 820 | 0.5054 | 0.6172 | 0.7256 | 0.8501 | 0.9141 | 0.7940 | 0.8392 | 0.0436 | 0.9049 | 0.8325 | 0.7506 | 0.8408 | 0.6495 | 0.7252 | 0.0435 | 0.7561 | 0.6197 | 0.6853 | | 0.2421 | 2.27 | 840 | 0.5026 | 0.6112 | 0.7294 | 0.8491 | 0.9194 | 0.8360 | 0.8872 | 0.0346 | 0.8828 | 0.7663 | 0.7797 | 0.8333 | 0.6152 | 0.6798 | 0.0345 | 0.7621 | 0.6667 | 0.6868 | | 0.6917 | 2.32 | 860 | 0.4947 | 0.6065 | 0.7343 | 0.8447 | 0.9111 | 0.8352 | 0.8533 | 0.0498 | 0.8766 | 0.8971 | 0.7168 | 0.8411 | 0.6384 | 0.6910 | 0.0496 | 0.7628 | 0.6024 | 0.6599 | | 0.2269 | 2.38 | 880 | 0.4963 | 0.6099 | 0.7354 | 0.8426 | 0.9023 | 0.8730 | 0.8372 | 0.1139 | 0.9161 | 0.8633 | 0.6419 | 0.8383 | 0.6218 | 0.7367 | 0.1128 | 0.7614 | 0.5984 | 0.5996 | | 0.6035 | 2.43 | 900 | 0.4550 | 0.6421 | 0.7362 | 0.8638 | 0.9284 | 0.7679 | 0.8624 | 0.0968 | 0.9286 | 0.8148 | 0.7544 | 0.8509 | 0.6474 | 0.7274 | 0.0964 | 0.7718 | 0.6700 | 0.7305 | | 0.8465 | 2.49 | 920 | 0.4764 | 0.6396 | 0.7425 | 0.8606 
| 0.9218 | 0.7772 | 0.8555 | 0.0980 | 0.9038 | 0.8710 | 0.7704 | 0.8516 | 0.6517 | 0.7149 | 0.0978 | 0.7614 | 0.6709 | 0.7288 |
| 0.3546 | 2.54 | 940 | 0.4636 | 0.6444 | 0.7360 | 0.8683 | 0.9343 | 0.7923 | 0.8181 | 0.0691 | 0.9320 | 0.7953 | 0.8108 | 0.8502 | 0.6426 | 0.7042 | 0.0690 | 0.7811 | 0.6783 | 0.7854 |
| 0.5057 | 2.59 | 960 | 0.4754 | 0.6315 | 0.7412 | 0.8617 | 0.9241 | 0.8575 | 0.8383 | 0.0538 | 0.9121 | 0.8026 | 0.7999 | 0.8512 | 0.6429 | 0.6996 | 0.0537 | 0.7765 | 0.6575 | 0.7393 |
| 0.2862 | 2.65 | 980 | 0.5106 | 0.6088 | 0.7152 | 0.8436 | 0.9366 | 0.7928 | 0.8561 | 0.0851 | 0.9149 | 0.8273 | 0.5938 | 0.8461 | 0.6602 | 0.7275 | 0.0849 | 0.7500 | 0.6152 | 0.5776 |
| 0.4181 | 2.7 | 1000 | 0.5597 | 0.6053 | 0.7262 | 0.8386 | 0.9201 | 0.8461 | 0.8863 | 0.1007 | 0.9030 | 0.8624 | 0.5651 | 0.8481 | 0.6616 | 0.7152 | 0.1005 | 0.7379 | 0.6313 | 0.5422 |
| 0.3954 | 2.76 | 1020 | 0.5037 | 0.6259 | 0.7259 | 0.8496 | 0.9245 | 0.8067 | 0.8656 | 0.1011 | 0.9178 | 0.7763 | 0.6895 | 0.8392 | 0.6596 | 0.7198 | 0.1007 | 0.7450 | 0.6707 | 0.6467 |
| 0.254 | 2.81 | 1040 | 0.5001 | 0.6446 | 0.7660 | 0.8607 | 0.9098 | 0.8624 | 0.8650 | 0.1116 | 0.8660 | 0.9136 | 0.8339 | 0.8409 | 0.6434 | 0.7097 | 0.1109 | 0.7618 | 0.6799 | 0.7653 |
| 0.4925 | 2.86 | 1060 | 0.5392 | 0.6221 | 0.7218 | 0.8537 | 0.9332 | 0.7817 | 0.8101 | 0.0580 | 0.9194 | 0.8478 | 0.7023 | 0.8312 | 0.6484 | 0.6910 | 0.0578 | 0.7736 | 0.6751 | 0.6773 |
| 0.3821 | 2.92 | 1080 | 0.5041 | 0.6211 | 0.7373 | 0.8515 | 0.9228 | 0.8788 | 0.8460 | 0.0918 | 0.9179 | 0.8416 | 0.6624 | 0.8409 | 0.6212 | 0.7112 | 0.0913 | 0.7629 | 0.6833 | 0.6368 |
| 0.3027 | 2.97 | 1100 | 0.4728 | 0.6427 | 0.7526 | 0.8604 | 0.9333 | 0.7614 | 0.8818 | 0.1308 | 0.8640 | 0.9228 | 0.7739 | 0.8447 | 0.6419 | 0.7208 | 0.1296 | 0.7681 | 0.6637 | 0.7300 |
| 0.3572 | 3.03 | 1120 | 0.5109 | 0.6388 | 0.7469 | 0.8545 | 0.9342 | 0.8335 | 0.9024 | 0.1647 | 0.8906 | 0.8025 | 0.7001 | 0.8405 | 0.6553 | 0.7219 | 0.1612 | 0.7568 | 0.6722 | 0.6640 |
| 0.6269 | 3.08 | 1140 | 0.4645 | 0.6641 | 0.7651 | 0.8679 | 0.9289 | 0.8038 | 0.8241 | 0.2209 | 0.9024 | 0.8788 | 0.7965 | 0.8590 | 0.6555 | 0.7373 | 0.2170 | 0.7714 | 0.6514 | 0.7571 |
| 0.6726 | 3.14 | 1160 | 0.5041 | 0.6440 | 0.7509 | 0.8537 | 0.9274 | 0.8540 | 0.8209 | 0.2448 | 0.9249 | 0.8419 | 0.6420 | 0.8467 | 0.6484 | 0.7265 | 0.2219 | 0.7558 | 0.6836 | 0.6248 |
| 0.2253 | 3.19 | 1180 | 0.4808 | 0.6661 | 0.7738 | 0.8672 | 0.9238 | 0.8341 | 0.8564 | 0.2223 | 0.8885 | 0.8999 | 0.7918 | 0.8512 | 0.6724 | 0.7252 | 0.2170 | 0.7739 | 0.6637 | 0.7590 |
| 0.1953 | 3.24 | 1200 | 0.4971 | 0.6561 | 0.7559 | 0.8637 | 0.9387 | 0.8074 | 0.8676 | 0.1940 | 0.8993 | 0.8488 | 0.7360 | 0.8437 | 0.6742 | 0.7212 | 0.1899 | 0.7770 | 0.6727 | 0.7137 |
| 0.3769 | 3.3 | 1220 | 0.4940 | 0.6666 | 0.7711 | 0.8688 | 0.9114 | 0.7625 | 0.8894 | 0.2404 | 0.8942 | 0.8456 | 0.8539 | 0.8469 | 0.6319 | 0.7291 | 0.2308 | 0.7824 | 0.6923 | 0.7527 |
| 0.2919 | 3.35 | 1240 | 0.5256 | 0.6579 | 0.7656 | 0.8620 | 0.9147 | 0.8240 | 0.8073 | 0.2330 | 0.8878 | 0.8364 | 0.8558 | 0.8452 | 0.6436 | 0.7307 | 0.2249 | 0.7696 | 0.6866 | 0.7049 |
| 0.8137 | 3.41 | 1260 | 0.4615 | 0.6636 | 0.7540 | 0.8717 | 0.9342 | 0.7923 | 0.8602 | 0.1674 | 0.9233 | 0.7903 | 0.8106 | 0.8536 | 0.6677 | 0.7304 | 0.1640 | 0.7837 | 0.6665 | 0.7792 |
| 0.5517 | 3.46 | 1280 | 0.4785 | 0.6581 | 0.7697 | 0.8593 | 0.9169 | 0.8348 | 0.9019 | 0.2548 | 0.8847 | 0.8347 | 0.7602 | 0.8331 | 0.6732 | 0.7226 | 0.2237 | 0.7744 | 0.6886 | 0.6912 |
| 0.3323 | 3.51 | 1300 | 0.4658 | 0.6783 | 0.7893 | 0.8657 | 0.8912 | 0.8488 | 0.8672 | 0.3312 | 0.9038 | 0.8458 | 0.8372 | 0.8292 | 0.6626 | 0.7546 | 0.3087 | 0.7827 | 0.6763 | 0.7342 |
| 0.2235 | 3.57 | 1320 | 0.4687 | 0.6690 | 0.7668 | 0.8669 | 0.9418 | 0.8131 | 0.8607 | 0.2613 | 0.8989 | 0.8500 | 0.7420 | 0.8448 | 0.6721 | 0.7273 | 0.2491 | 0.7778 | 0.6867 | 0.7250 |
| 0.4178 | 3.62 | 1340 | 0.5271 | 0.6617 | 0.7577 | 0.8631 | 0.9271 | 0.8110 | 0.8500 | 0.2745 | 0.9291 | 0.7756 | 0.7368 | 0.8565 | 0.6700 | 0.7388 | 0.2480 | 0.7648 | 0.6509 | 0.7031 |
| 0.1709 | 3.68 | 1360 | 0.4917 | 0.6743 | 0.7839 | 0.8666 | 0.9345 | 0.8278 | 0.8188 | 0.3626 | 0.8856 | 0.8878 | 0.7699 | 0.8586 | 0.6593 | 0.7338 | 0.3077 | 0.7715 | 0.6556 | 0.7334 |
| 0.5981 | 3.73 | 1380 | 0.5301 | 0.6598 | 0.7651 | 0.8619 | 0.9562 | 0.8015 | 0.8527 | 0.3087 | 0.8823 | 0.8672 | 0.6870 | 0.8548 | 0.6651 | 0.7469 | 0.2630 | 0.7759 | 0.6539 | 0.6593 |
| 0.3507 | 3.78 | 1400 | 0.5341 | 0.6544 | 0.7687 | 0.8543 | 0.9212 | 0.7396 | 0.8167 | 0.4255 | 0.8876 | 0.8486 | 0.7416 | 0.8517 | 0.6158 | 0.7268 | 0.3144 | 0.7615 | 0.6660 | 0.6448 |
| 0.3053 | 3.84 | 1420 | 0.5660 | 0.6511 | 0.7660 | 0.8510 | 0.9112 | 0.7759 | 0.8743 | 0.3550 | 0.8915 | 0.8701 | 0.6838 | 0.8407 | 0.6537 | 0.7418 | 0.3087 | 0.7673 | 0.6443 | 0.6014 |
| 0.4962 | 3.89 | 1440 | 0.5701 | 0.6535 | 0.7465 | 0.8620 | 0.9387 | 0.7989 | 0.8546 | 0.2405 | 0.9323 | 0.7459 | 0.7148 | 0.8556 | 0.6701 | 0.7382 | 0.2139 | 0.7709 | 0.6388 | 0.6870 |
| 0.6165 | 3.95 | 1460 | 0.4963 | 0.6711 | 0.7622 | 0.8720 | 0.9393 | 0.7786 | 0.8285 | 0.2492 | 0.9198 | 0.8354 | 0.7850 | 0.8648 | 0.6480 | 0.7461 | 0.2351 | 0.7754 | 0.6862 | 0.7418 |
| 0.2898 | 4.0 | 1480 | 0.4906 | 0.6751 | 0.7688 | 0.8691 | 0.9254 | 0.7811 | 0.8763 | 0.3229 | 0.9206 | 0.7502 | 0.8053 | 0.8583 | 0.6574 | 0.7458 | 0.2937 | 0.7713 | 0.6572 | 0.7421 |
| 0.2248 | 4.05 | 1500 | 0.5393 | 0.6627 | 0.7700 | 0.8636 | 0.9304 | 0.7995 | 0.8683 | 0.2900 | 0.8898 | 0.8543 | 0.7578 | 0.8540 | 0.6586 | 0.7258 | 0.2434 | 0.7721 | 0.6835 | 0.7014 |
| 0.2432 | 4.11 | 1520 | 0.5233 | 0.6732 | 0.7773 | 0.8627 | 0.9456 | 0.7776 | 0.8200 | 0.4175 | 0.8811 | 0.8671 | 0.7321 | 0.8384 | 0.6516 | 0.7257 | 0.3281 | 0.7739 | 0.7027 | 0.6919 |
| 0.3847 | 4.16 | 1540 | 0.5011 | 0.6842 | 0.7816 | 0.8757 | 0.9203 | 0.7606 | 0.8733 | 0.3186 | 0.9076 | 0.8372 | 0.8534 | 0.8533 | 0.6504 | 0.7369 | 0.2841 | 0.7927 | 0.6897 | 0.7826 |
| 0.3696 | 4.22 | 1560 | 0.4968 | 0.6889 | 0.7971 | 0.8767 | 0.9334 | 0.8285 | 0.8607 | 0.3609 | 0.8816 | 0.8655 | 0.8492 | 0.8566 | 0.6483 | 0.7309 | 0.3169 | 0.7953 | 0.6897 | 0.7848 |
| 0.6256 | 4.27 | 1580 | 0.5060 | 0.6920 | 0.7930 | 0.8755 | 0.9256 | 0.7798 | 0.8598 | 0.3865 | 0.8937 | 0.8652 | 0.8407 | 0.8563 | 0.6671 | 0.7406 | 0.3277 | 0.7866 | 0.6882 | 0.7773 |
| 0.123 | 4.32 | 1600 | 0.5031 | 0.6911 | 0.7886 | 0.8762 | 0.9247 | 0.7878 | 0.8605 | 0.3802 | 0.9060 | 0.7956 | 0.8651 | 0.8568 | 0.6607 | 0.7431 | 0.3339 | 0.7893 | 0.6757 | 0.7782 |
| 0.4976 | 4.38 | 1620 | 0.5683 | 0.6833 | 0.7880 | 0.8669 | 0.9163 | 0.8146 | 0.8619 | 0.4185 | 0.9166 | 0.8540 | 0.7338 | 0.8554 | 0.6671 | 0.7501 | 0.3581 | 0.7683 | 0.6807 | 0.7033 |
| 0.3203 | 4.43 | 1640 | 0.5254 | 0.6852 | 0.7749 | 0.8705 | 0.9330 | 0.7844 | 0.8444 | 0.3818 | 0.9296 | 0.8045 | 0.7469 | 0.8568 | 0.6690 | 0.7506 | 0.3494 | 0.7741 | 0.6708 | 0.7259 |
| 0.233 | 4.49 | 1660 | 0.5000 | 0.7018 | 0.8034 | 0.8800 | 0.9299 | 0.8099 | 0.8661 | 0.4532 | 0.9113 | 0.8396 | 0.8138 | 0.8593 | 0.6544 | 0.7502 | 0.3850 | 0.8005 | 0.6767 | 0.7866 |
| 0.131 | 4.54 | 1680 | 0.5944 | 0.6698 | 0.7886 | 0.8628 | 0.9372 | 0.7772 | 0.8888 | 0.4697 | 0.8819 | 0.8800 | 0.6857 | 0.8514 | 0.6482 | 0.7408 | 0.3710 | 0.7992 | 0.6191 | 0.6589 |
| 0.1867 | 4.59 | 1700 | 0.5355 | 0.6948 | 0.8107 | 0.8731 | 0.9344 | 0.7717 | 0.8889 | 0.5399 | 0.8620 | 0.8469 | 0.8311 | 0.8511 | 0.6522 | 0.7434 | 0.3764 | 0.7892 | 0.6924 | 0.7588 |
| 0.2121 | 4.65 | 1720 | 0.5226 | 0.6934 | 0.7864 | 0.8759 | 0.9256 | 0.7900 | 0.8240 | 0.3943 | 0.9170 | 0.7991 | 0.8549 | 0.8562 | 0.6568 | 0.7473 | 0.3600 | 0.7847 | 0.6623 | 0.7868 |
| 0.4442 | 4.7 | 1740 | 0.5122 | 0.7049 | 0.8078 | 0.8802 | 0.9236 | 0.8008 | 0.8868 | 0.4509 | 0.8927 | 0.8270 | 0.8730 | 0.8555 | 0.6574 | 0.7461 | 0.3940 | 0.7971 | 0.6929 | 0.7914 |
| 0.2561 | 4.76 | 1760 | 0.5097 | 0.6952 | 0.8068 | 0.8723 | 0.9096 | 0.8027 | 0.8991 | 0.4735 | 0.8852 | 0.8207 | 0.8569 | 0.8460 | 0.6619 | 0.7383 | 0.3849 | 0.7870 | 0.6949 | 0.7534 |
| 0.3744 | 4.81 | 1780 | 0.5762 | 0.6562 | 0.7702 | 0.8561 | 0.9452 | 0.7934 | 0.8702 | 0.4003 | 0.8877 | 0.8651 | 0.6295 | 0.8485 | 0.6679 | 0.7432 | 0.3079 | 0.7800 | 0.6413 | 0.6049 |
| 0.2373 | 4.86 | 1800 | 0.5477 | 0.6547 | 0.7715 | 0.8550 | 0.9341 | 0.8386 | 0.8405 | 0.4085 | 0.9097 | 0.8517 | 0.6177 | 0.8506 | 0.6647 | 0.7467 | 0.3169 | 0.7802 | 0.6353 | 0.5885 |
| 0.1851 | 4.92 | 1820 | 0.5771 | 0.6565 | 0.7735 | 0.8503 | 0.9038 | 0.8391 | 0.8573 | 0.4305 | 0.9152 | 0.7979 | 0.6707 | 0.8329 | 0.6692 | 0.7400 | 0.3561 | 0.7732 | 0.6264 | 0.5974 |
| 0.3411 | 4.97 | 1840 | 0.5119 | 0.6813 | 0.7933 | 0.8647 | 0.9168 | 0.8103 | 0.8401 | 0.4585 | 0.9003 | 0.8885 | 0.7387 | 0.8526 | 0.6616 | 0.7445 | 0.3726 | 0.7722 | 0.6832 | 0.6821 |
| 0.1627 | 5.03 | 1860 | 0.5401 | 0.6720 | 0.7865 | 0.8627 | 0.9415 | 0.7899 | 0.9051 | 0.4505 | 0.8736 | 0.8287 | 0.7158 | 0.8537 | 0.6649 | 0.7325 | 0.3189 | 0.7719 | 0.6777 | 0.6844 |
| 0.4794 | 5.08 | 1880 | 0.5325 | 0.6793 | 0.7883 | 0.8638 | 0.9325 | 0.7814 | 0.8045 | 0.5115 | 0.8987 | 0.8550 | 0.7343 | 0.8504 | 0.6476 | 0.7384 | 0.3844 | 0.7738 | 0.6850 | 0.6754 |
| 0.2968 | 5.14 | 1900 | 0.5264 | 0.6856 | 0.7945 | 0.8692 | 0.9215 | 0.8237 | 0.8716 | 0.4383 | 0.9035 | 0.8371 | 0.7659 | 0.8560 | 0.6579 | 0.7421 | 0.3546 | 0.7767 | 0.7052 | 0.7065 |
| 0.1931 | 5.19 | 1920 | 0.4982 | 0.6967 | 0.8119 | 0.8715 | 0.9218 | 0.8092 | 0.8836 | 0.5354 | 0.8854 | 0.8679 | 0.7801 | 0.8595 | 0.6635 | 0.7479 | 0.4057 | 0.7762 | 0.7074 | 0.7163 |
| 0.5028 | 5.24 | 1940 | 0.4865 | 0.7072 | 0.8092 | 0.8816 | 0.9241 | 0.8274 | 0.8591 | 0.4211 | 0.8943 | 0.8659 | 0.8723 | 0.8576 | 0.6624 | 0.7439 | 0.3886 | 0.7961 | 0.7090 | 0.7929 |
| 0.1652 | 5.3 | 1960 | 0.5541 | 0.6710 | 0.7825 | 0.8574 | 0.8865 | 0.7954 | 0.8853 | 0.3600 | 0.8946 | 0.8783 | 0.7775 | 0.8322 | 0.6733 | 0.7533 | 0.3282 | 0.7663 | 0.6931 | 0.6504 |
| 0.3028 | 5.35 | 1980 | 0.4632 | 0.6915 | 0.7915 | 0.8730 | 0.9299 | 0.8132 | 0.8737 | 0.3976 | 0.9063 | 0.8558 | 0.7641 | 0.8598 | 0.6714 | 0.7551 | 0.3663 | 0.7806 | 0.6894 | 0.7178 |
| 0.2153 | 5.41 | 2000 | 0.6220 | 0.6541 | 0.7599 | 0.8520 | 0.9176 | 0.7605 | 0.8618 | 0.3876 | 0.9163 | 0.8224 | 0.6535 | 0.8475 | 0.6598 | 0.7495 | 0.3470 | 0.7667 | 0.6098 | 0.5987 |
| 0.5976 | 5.46 | 2020 | 0.5749 | 0.6739 | 0.7944 | 0.8628 | 0.9262 | 0.8053 | 0.8888 | 0.4578 | 0.8673 | 0.8466 | 0.7687 | 0.8499 | 0.6627 | 0.7494 | 0.3538 | 0.7848 | 0.6277 | 0.6892 |
| 0.1812 | 5.51 | 2040 | 0.5282 | 0.6862 | 0.7879 | 0.8738 | 0.9364 | 0.7998 | 0.8519 | 0.3850 | 0.8983 | 0.8406 | 0.8033 | 0.8570 | 0.6632 | 0.7337 | 0.3472 | 0.7966 | 0.6566 | 0.7492 |
| 0.3064 | 5.57 | 2060 | 0.5309 | 0.6847 | 0.7931 | 0.8699 | 0.9108 | 0.8068 | 0.8459 | 0.3914 | 0.8923 | 0.8551 | 0.8496 | 0.8434 | 0.6659 | 0.7385 | 0.3538 | 0.7964 | 0.6594 | 0.7352 |
| 0.2951 | 5.62 | 2080 | 0.5739 | 0.6811 | 0.7996 | 0.8617 | 0.8914 | 0.8086 | 0.8350 | 0.5265 | 0.8970 | 0.7884 | 0.8501 | 0.8313 | 0.6658 | 0.7353 | 0.4000 | 0.7862 | 0.6409 | 0.7082 |
| 0.2031 | 5.68 | 2100 | 0.5522 | 0.6730 | 0.7927 | 0.8585 | 0.9065 | 0.8260 | 0.8216 | 0.4632 | 0.8839 | 0.8735 | 0.7741 | 0.8449 | 0.6658 | 0.7263 | 0.3915 | 0.7747 | 0.6401 | 0.6673 |
| 0.1091 | 5.73 | 2120 | 0.5696 | 0.6742 | 0.7839 | 0.8621 | 0.9269 | 0.8129 | 0.8489 | 0.4785 | 0.9088 | 0.7822 | 0.7290 | 0.8612 | 0.6758 | 0.7399 | 0.3564 | 0.7655 | 0.6399 | 0.6809 |
| 0.6339 | 5.78 | 2140 | 0.5735 | 0.6845 | 0.7946 | 0.8695 | 0.9315 | 0.8040 | 0.8301 | 0.4774 | 0.8990 | 0.8546 | 0.7657 | 0.8610 | 0.6691 | 0.7390 | 0.3747 | 0.7893 | 0.6461 | 0.7126 |
| 0.1977 | 5.84 | 2160 | 0.6636 | 0.6630 | 0.7802 | 0.8553 | 0.9485 | 0.7973 | 0.8691 | 0.4899 | 0.8769 | 0.8494 | 0.6304 | 0.8548 | 0.6756 | 0.7360 | 0.3940 | 0.7771 | 0.6022 | 0.6014 |
| 0.1821 | 5.89 | 2180 | 0.5528 | 0.6861 | 0.7945 | 0.8705 | 0.9207 | 0.8272 | 0.8640 | 0.4390 | 0.9102 | 0.8252 | 0.7750 | 0.8518 | 0.6572 | 0.7439 | 0.3868 | 0.7954 | 0.6522 | 0.7155 |
| 0.3178 | 5.95 | 2200 | 0.4989 | 0.7168 | 0.8208 | 0.8843 | 0.9219 | 0.8031 | 0.8626 | 0.5303 | 0.9022 | 0.8588 | 0.8670 | 0.8654 | 0.6787 | 0.7565 | 0.4089 | 0.7958 | 0.6937 | 0.8186 |
| 0.1903 | 6.0 | 2220 | 0.5606 | 0.6787 | 0.7902 | 0.8636 | 0.9309 | 0.8103 | 0.8611 | 0.5126 | 0.9116 | 0.8291 | 0.6759 | 0.8525 | 0.6757 | 0.7541 | 0.3701 | 0.7789 | 0.6681 | 0.6514 |
| 0.2833 | 6.05 | 2240 | 0.5620 | 0.6807 | 0.7983 | 0.8644 | 0.9268 | 0.8434 | 0.8579 | 0.5070 | 0.9044 | 0.8698 | 0.6787 | 0.8547 | 0.6758 | 0.7542 | 0.3781 | 0.7802 | 0.6688 | 0.6529 |
| 0.2418 | 6.11 | 2260 | 0.5505 | 0.6805 | 0.7898 | 0.8647 | 0.9422 | 0.7859 | 0.8901 | 0.5280 | 0.8952 | 0.7832 | 0.7041 | 0.8573 | 0.6716 | 0.7501 | 0.3849 | 0.7738 | 0.6539 | 0.6721 |
| 0.2252 | 6.16 | 2280 | 0.5652 | 0.6728 | 0.7915 | 0.8613 | 0.9286 | 0.8299 | 0.8828 | 0.4556 | 0.8881 | 0.8772 | 0.6783 | 0.8577 | 0.6723 | 0.7418 | 0.3651 | 0.7720 | 0.6520 | 0.6488 |
| 0.3011 | 6.22 | 2300 | 0.5430 | 0.6779 | 0.7898 | 0.8644 | 0.9412 | 0.8307 | 0.8624 | 0.4695 | 0.8928 | 0.8345 | 0.6974 | 0.8553 | 0.6649 | 0.7418 | 0.3685 | 0.7744 | 0.6718 | 0.6684 |
| 0.2122 | 6.27 | 2320 | 0.5227 | 0.6807 | 0.7919 | 0.8644 | 0.9413 | 0.8013 | 0.8616 | 0.5139 | 0.8971 | 0.8590 | 0.6692 | 0.8619 | 0.6775 | 0.7510 | 0.3981 | 0.7753 | 0.6577 | 0.6438 |
| 0.1844 | 6.32 | 2340 | 0.5499 | 0.6706 | 0.7838 | 0.8614 | 0.9320 | 0.7995 | 0.9013 | 0.4303 | 0.8958 | 0.8736 | 0.6541 | 0.8609 | 0.6771 | 0.7307 | 0.3629 | 0.7736 | 0.6543 | 0.6349 |
| 0.2772 | 6.38 | 2360 | 0.5676 | 0.6693 | 0.7903 | 0.8585 | 0.9294 | 0.8483 | 0.8680 | 0.4617 | 0.8903 | 0.8894 | 0.6452 | 0.8541 | 0.6733 | 0.7376 | 0.3695 | 0.7694 | 0.6555 | 0.6254 |
| 0.2566 | 6.43 | 2380 | 0.6250 | 0.6619 | 0.7766 | 0.8533 | 0.9367 | 0.8065 | 0.8631 | 0.4641 | 0.8907 | 0.8596 | 0.6153 | 0.8483 | 0.6630 | 0.7507 | 0.3953 | 0.7679 | 0.6294 | 0.5787 |
| 0.3323 | 6.49 | 2400 | 0.5067 | 0.7154 | 0.8210 | 0.8836 | 0.9223 | 0.8125 | 0.9008 | 0.4915 | 0.8880 | 0.8616 | 0.8704 | 0.8611 | 0.6731 | 0.7533 | 0.4140 | 0.7975 | 0.7088 | 0.7998 |
| 0.2489 | 6.54 | 2420 | 0.5678 | 0.7036 | 0.8133 | 0.8760 | 0.9063 | 0.8346 | 0.8668 | 0.5156 | 0.9032 | 0.7811 | 0.8853 | 0.8540 | 0.6566 | 0.7593 | 0.4189 | 0.7852 | 0.6809 | 0.7702 |
| 0.2311 | 6.59 | 2440 | 0.4916 | 0.7172 | 0.8204 | 0.8842 | 0.9310 | 0.7975 | 0.8724 | 0.5405 | 0.8926 | 0.8430 | 0.8657 | 0.8610 | 0.6657 | 0.7604 | 0.4291 | 0.8003 | 0.7109 | 0.7931 |
| 0.2477 | 6.65 | 2460 | 0.5204 | 0.7035 | 0.8050 | 0.8789 | 0.9197 | 0.7999 | 0.8689 | 0.4751 | 0.9077 | 0.7987 | 0.8651 | 0.8574 | 0.6639 | 0.7600 | 0.3805 | 0.7907 | 0.6869 | 0.7853 |
| 0.1485 | 6.7 | 2480 | 0.4915 | 0.7107 | 0.8163 | 0.8813 | 0.9236 | 0.7980 | 0.8706 | 0.5387 | 0.9014 | 0.8269 | 0.8550 | 0.8622 | 0.6579 | 0.7607 | 0.4010 | 0.7895 | 0.7009 | 0.8027 |
| 0.4349 | 6.76 | 2500 | 0.5276 | 0.7002 | 0.7970 | 0.8775 | 0.9329 | 0.7774 | 0.8702 | 0.4441 | 0.8984 | 0.8198 | 0.8364 | 0.8511 | 0.6592 | 0.7567 | 0.3786 | 0.7927 | 0.6918 | 0.7714 |
| 0.3179 | 6.81 | 2520 | 0.5154 | 0.7088 | 0.8161 | 0.8808 | 0.9203 | 0.8226 | 0.8750 | 0.4807 | 0.8934 | 0.8627 | 0.8581 | 0.8562 | 0.6544 | 0.7548 | 0.3973 | 0.7935 | 0.7139 | 0.7916 |
| 0.1755 | 6.86 | 2540 | 0.5192 | 0.7066 | 0.8041 | 0.8798 | 0.9429 | 0.7978 | 0.8730 | 0.4760 | 0.8888 | 0.8054 | 0.8447 | 0.8578 | 0.6745 | 0.7443 | 0.3984 | 0.7910 | 0.6896 | 0.7907 |
| 0.3205 | 6.92 | 2560 | 0.5411 | 0.7057 | 0.8132 | 0.8799 | 0.9279 | 0.8140 | 0.8539 | 0.4790 | 0.8854 | 0.8720 | 0.8599 | 0.8584 | 0.6575 | 0.7472 | 0.3997 | 0.7957 | 0.6975 | 0.7837 |
| 0.2455 | 6.97 | 2580 | 0.5374 | 0.7049 | 0.8050 | 0.8813 | 0.9305 | 0.8131 | 0.8690 | 0.4408 | 0.8994 | 0.8181 | 0.8642 | 0.8580 | 0.6640 | 0.7500 | 0.3693 | 0.7973 | 0.7041 | 0.7917 |
| 0.1735 | 7.03 | 2600 | 0.5765 | 0.6996 | 0.8064 | 0.8777 | 0.9328 | 0.8060 | 0.8527 | 0.4451 | 0.8809 | 0.8905 | 0.8367 | 0.8585 | 0.6614 | 0.7508 | 0.3731 | 0.7923 | 0.6896 | 0.7711 |
| 0.2041 | 7.08 | 2620 | 0.5683 | 0.6950 | 0.8024 | 0.8727 | 0.9287 | 0.7972 | 0.8557 | 0.5188 | 0.8987 | 0.8203 | 0.7976 | 0.8582 | 0.6629 | 0.7510 | 0.3927 | 0.7829 | 0.6852 | 0.7325 |
| 0.8444 | 7.14 | 2640 | 0.5634 | 0.7047 | 0.8150 | 0.8789 | 0.9227 | 0.8060 | 0.8404 | 0.5489 | 0.8972 | 0.8200 | 0.8697 | 0.8613 | 0.6505 | 0.7530 | 0.3844 | 0.7876 | 0.6964 | 0.7995 |
| 0.0665 | 7.19 | 2660 | 0.6066 | 0.6905 | 0.7964 | 0.8716 | 0.9297 | 0.8205 | 0.8462 | 0.4586 | 0.8991 | 0.8252 | 0.7952 | 0.8595 | 0.6615 | 0.7441 | 0.3687 | 0.7771 | 0.6958 | 0.7269 |
| 0.2395 | 7.24 | 2680 | 0.5959 | 0.6956 | 0.8116 | 0.8710 | 0.9214 | 0.8399 | 0.8723 | 0.5090 | 0.8846 | 0.8723 | 0.7816 | 0.8600 | 0.6627 | 0.7525 | 0.3950 | 0.7708 | 0.7096 | 0.7183 |
| 0.2261 | 7.3 | 2700 | 0.6236 | 0.6986 | 0.7994 | 0.8737 | 0.9228 | 0.8204 | 0.8664 | 0.4665 | 0.9133 | 0.8226 | 0.7837 | 0.8597 | 0.6700 | 0.7553 | 0.4027 | 0.7777 | 0.7046 | 0.7206 |
| 0.2801 | 7.35 | 2720 | 0.6171 | 0.6922 | 0.7981 | 0.8709 | 0.9289 | 0.7876 | 0.8769 | 0.5005 | 0.9021 | 0.8327 | 0.7577 | 0.8623 | 0.6730 | 0.7564 | 0.3879 | 0.7792 | 0.6906 | 0.6958 |
| 0.1989 | 7.41 | 2740 | 0.6196 | 0.6891 | 0.7930 | 0.8688 | 0.9366 | 0.8017 | 0.8474 | 0.4827 | 0.8997 | 0.8451 | 0.7376 | 0.8572 | 0.6661 | 0.7588 | 0.3950 | 0.7761 | 0.6902 | 0.6806 |
| 0.1384 | 7.46 | 2760 | 0.5358 | 0.6956 | 0.7983 | 0.8731 | 0.9348 | 0.7960 | 0.8832 | 0.4893 | 0.9028 | 0.8235 | 0.7587 | 0.8634 | 0.6720 | 0.7613 | 0.4028 | 0.7837 | 0.6886 | 0.6974 |
| 0.3099 | 7.51 | 2780 | 0.5289 | 0.6903 | 0.8027 | 0.8684 | 0.9217 | 0.8230 | 0.8747 | 0.5106 | 0.8992 | 0.8399 | 0.7498 | 0.8586 | 0.6657 | 0.7409 | 0.4050 | 0.7746 | 0.6982 | 0.6892 |
| 0.2237 | 7.57 | 2800 | 0.6377 | 0.6770 | 0.7940 | 0.8621 | 0.9389 | 0.8274 | 0.9028 | 0.4578 | 0.8726 | 0.8858 | 0.6727 | 0.8530 | 0.6822 | 0.7126 | 0.3779 | 0.7717 | 0.6929 | 0.6486 |
| 0.2499 | 7.62 | 2820 | 0.6043 | 0.6864 | 0.7913 | 0.8675 | 0.9328 | 0.8219 | 0.8395 | 0.4609 | 0.9035 | 0.8488 | 0.7320 | 0.8536 | 0.6633 | 0.7578 | 0.3877 | 0.7766 | 0.6927 | 0.6732 |
| 0.1981 | 7.68 | 2840 | 0.6478 | 0.6858 | 0.7856 | 0.8661 | 0.9310 | 0.7797 | 0.8623 | 0.4875 | 0.9099 | 0.7917 | 0.7374 | 0.8521 | 0.6655 | 0.7638 | 0.3883 | 0.7693 | 0.6831 | 0.6786 |
| 1.2055 | 7.73 | 2860 | 0.5979 | 0.6911 | 0.7992 | 0.8706 | 0.9300 | 0.8253 | 0.8816 | 0.4654 | 0.8909 | 0.8219 | 0.7795 | 0.8527 | 0.6682 | 0.7587 | 0.3706 | 0.7798 | 0.6934 | 0.7141 |
| 0.177 | 7.78 | 2880 | 0.5908 | 0.6825 | 0.7887 | 0.8733 | 0.9374 | 0.8157 | 0.8902 | 0.3178 | 0.8718 | 0.8589 | 0.8291 | 0.8494 | 0.6710 | 0.7346 | 0.2802 | 0.7931 | 0.6877 | 0.7617 |
| 0.1597 | 7.84 | 2900 | 0.5456 | 0.6964 | 0.8091 | 0.8749 | 0.9183 | 0.8190 | 0.8836 | 0.4531 | 0.8842 | 0.8753 | 0.8304 | 0.8540 | 0.6721 | 0.7388 | 0.3674 | 0.7920 | 0.6876 | 0.7632 |
| 0.0895 | 7.89 | 2920 | 0.5558 | 0.7070 | 0.8120 | 0.8793 | 0.9242 | 0.8089 | 0.8605 | 0.5186 | 0.9046 | 0.8327 | 0.8346 | 0.8596 | 0.6662 | 0.7577 | 0.4105 | 0.7944 | 0.6913 | 0.7691 |
| 1.3217 | 7.95 | 2940 | 0.6992 | 0.6776 | 0.7911 | 0.8600 | 0.9214 | 0.8120 | 0.8512 | 0.5140 | 0.8996 | 0.8351 | 0.7045 | 0.8574 | 0.6686 | 0.7577 | 0.4098 | 0.7632 | 0.6431 | 0.6432 |
| 0.1388 | 8.0 | 2960 | 0.6492 | 0.6768 | 0.7845 | 0.8607 | 0.9357 | 0.8100 | 0.8588 | 0.4899 | 0.8971 | 0.8021 | 0.6977 | 0.8596 | 0.6762 | 0.7552 | 0.4046 | 0.7614 | 0.6291 | 0.6513 |
| 0.1877 | 8.05 | 2980 | 0.6229 | 0.6875 | 0.7901 | 0.8687 | 0.9413 | 0.8166 | 0.8688 | 0.4622 | 0.9071 | 0.8412 | 0.6936 | 0.8604 | 0.6896 | 0.7563 | 0.3977 | 0.7819 | 0.6606 | 0.6663 |
| 0.3116 | 8.11 | 3000 | 0.5704 | 0.6917 | 0.7952 | 0.8714 | 0.9274 | 0.8182 | 0.8547 | 0.4501 | 0.9069 | 0.8421 | 0.7672 | 0.8611 | 0.6716 | 0.7554 | 0.3837 | 0.7788 | 0.6890 | 0.7022 |
| 0.2879 | 8.16 | 3020 | 0.5835 | 0.7101 | 0.8126 | 0.8825 | 0.9291 | 0.8085 | 0.8748 | 0.4732 | 0.8971 | 0.8594 | 0.8460 | 0.8624 | 0.6764 | 0.7618 | 0.3972 | 0.8020 | 0.6937 | 0.7770 |
| 0.5737 | 8.22 | 3040 | 0.6887 | 0.6726 | 0.7863 | 0.8600 | 0.9237 | 0.8025 | 0.8875 | 0.4794 | 0.9041 | 0.8379 | 0.6691 | 0.8562 | 0.6769 | 0.7674 | 0.3815 | 0.7770 | 0.6306 | 0.6186 |
| 0.1903 | 8.27 | 3060 | 0.6567 | 0.6771 | 0.7978 | 0.8621 | 0.9350 | 0.8183 | 0.8513 | 0.5467 | 0.8830 | 0.8406 | 0.7096 | 0.8605 | 0.6709 | 0.7651 | 0.3852 | 0.7770 | 0.6328 | 0.6483 |
| 0.2059 | 8.32 | 3080 | 0.6863 | 0.6680 | 0.7786 | 0.8598 | 0.9374 | 0.8010 | 0.8678 | 0.4307 | 0.8956 | 0.8557 | 0.6621 | 0.8551 | 0.6724 | 0.7583 | 0.3706 | 0.7836 | 0.6266 | 0.6095 |
| 0.134 | 8.38 | 3100 | 0.6730 | 0.6726 | 0.7862 | 0.8615 | 0.9281 | 0.8169 | 0.8717 | 0.4374 | 0.8947 | 0.8647 | 0.6900 | 0.8577 | 0.6701 | 0.7512 | 0.3684 | 0.7750 | 0.6505 | 0.6353 |
| 0.1744 | 8.43 | 3120 | 0.6429 | 0.6826 | 0.7952 | 0.8652 | 0.9261 | 0.8063 | 0.8860 | 0.4900 | 0.8978 | 0.8551 | 0.7051 | 0.8608 | 0.6784 | 0.7613 | 0.3879 | 0.7774 | 0.6671 | 0.6451 |
| 0.4322 | 8.49 | 3140 | 0.5696 | 0.7130 | 0.8214 | 0.8841 | 0.9220 | 0.8150 | 0.8894 | 0.5038 | 0.8975 | 0.8703 | 0.8518 | 0.8658 | 0.6691 | 0.7633 | 0.4034 | 0.8054 | 0.7030 | 0.7812 |
| 0.2194 | 8.54 | 3160 | 0.5893 | 0.7189 | 0.8264 | 0.8856 | 0.9297 | 0.8216 | 0.8633 | 0.5246 | 0.8828 | 0.8793 | 0.8839 | 0.8673 | 0.6742 | 0.7623 | 0.4170 | 0.7991 | 0.7138 | 0.7984 |
| 0.1172 | 8.59 | 3180 | 0.5640 | 0.7158 | 0.8153 | 0.8856 | 0.9243 | 0.7961 | 0.8757 | 0.4922 | 0.9074 | 0.8395 | 0.8720 | 0.8658 | 0.6747 | 0.7642 | 0.4004 | 0.8027 | 0.7061 | 0.7968 |
| 0.1652 | 8.65 | 3200 | 0.5842 | 0.6962 | 0.8040 | 0.8712 | 0.9302 | 0.8237 | 0.8609 | 0.5237 | 0.9023 | 0.8380 | 0.7494 | 0.8603 | 0.6589 | 0.7763 | 0.4185 | 0.7767 | 0.6953 | 0.6875 |
| 0.4658 | 8.7 | 3220 | 0.6697 | 0.6758 | 0.7783 | 0.8636 | 0.9338 | 0.8064 | 0.8823 | 0.4319 | 0.9122 | 0.7787 | 0.7028 | 0.8632 | 0.6699 | 0.7520 | 0.3790 | 0.7664 | 0.6411 | 0.6587 |
| 0.0752 | 8.76 | 3240 | 0.6035 | 0.7060 | 0.8125 | 0.8813 | 0.9218 | 0.8450 | 0.8659 | 0.4448 | 0.9000 | 0.8520 | 0.8579 | 0.8620 | 0.6622 | 0.7580 | 0.3709 | 0.7931 | 0.7034 | 0.7926 |
| 0.3356 | 8.81 | 3260 | 0.5770 | 0.7120 | 0.8166 | 0.8844 | 0.9343 | 0.8245 | 0.8798 | 0.4381 | 0.8787 | 0.8943 | 0.8664 | 0.8636 | 0.6692 | 0.7580 | 0.3873 | 0.7999 | 0.7092 | 0.7968 |
| 0.1724 | 8.86 | 3280 | 0.5592 | 0.7136 | 0.8141 | 0.8845 | 0.9307 | 0.8032 | 0.8869 | 0.4732 | 0.8946 | 0.8451 | 0.8653 | 0.8623 | 0.6718 | 0.7598 | 0.3997 | 0.8022 | 0.7091 | 0.7905 |
| 0.158 | 8.92 | 3300 | 0.5446 | 0.7085 | 0.8110 | 0.8828 | 0.9316 | 0.8088 | 0.8938 | 0.4891 | 0.9008 | 0.7943 | 0.8588 | 0.8628 | 0.6714 | 0.7629 | 0.3823 | 0.8021 | 0.6885 | 0.7896 |
| 0.2343 | 8.97 | 3320 | 0.5785 | 0.7095 | 0.8132 | 0.8813 | 0.9304 | 0.8096 | 0.8762 | 0.4783 | 0.8901 | 0.8654 | 0.8425 | 0.8621 | 0.6748 | 0.7569 | 0.4024 | 0.7963 | 0.7013 | 0.7725 |
| 0.2144 | 9.03 | 3340 | 0.5870 | 0.7055 | 0.8064 | 0.8802 | 0.9282 | 0.8202 | 0.8745 | 0.4448 | 0.9035 | 0.8486 | 0.8252 | 0.8576 | 0.6765 | 0.7567 | 0.3876 | 0.8005 | 0.7014 | 0.7579 |
| 0.1379 | 9.08 | 3360 | 0.5864 | 0.7083 | 0.8106 | 0.8813 | 0.9288 | 0.8241 | 0.8696 | 0.4435 | 0.8968 | 0.8834 | 0.8281 | 0.8592 | 0.6748 | 0.7617 | 0.4023 | 0.8025 | 0.6979 | 0.7600 |
| 0.335 | 9.14 | 3380 | 0.6072 | 0.6871 | 0.7925 | 0.8707 | 0.9314 | 0.8154 | 0.8720 | 0.4397 | 0.9062 | 0.8387 | 0.7444 | 0.8533 | 0.6684 | 0.7545 | 0.3940 | 0.7999 | 0.6551 | 0.6847 |
| 0.1489 | 9.19 | 3400 | 0.5636 | 0.7076 | 0.8125 | 0.8803 | 0.9247 | 0.8143 | 0.8720 | 0.4925 | 0.8996 | 0.8409 | 0.8434 | 0.8571 | 0.6642 | 0.7563 | 0.4017 | 0.7987 | 0.7074 | 0.7683 |
| 0.1118 | 9.24 | 3420 | 0.5699 | 0.7112 | 0.8188 | 0.8824 | 0.9313 | 0.7996 | 0.8835 | 0.4937 | 0.8760 | 0.8831 | 0.8648 | 0.8635 | 0.6695 | 0.7542 | 0.3980 | 0.7961 | 0.7081 | 0.7886 |
| 0.2613 | 9.3 | 3440 | 0.5491 | 0.7160 | 0.8154 | 0.8853 | 0.9373 | 0.7939 | 0.8859 | 0.5014 | 0.8951 | 0.8444 | 0.8495 | 0.8663 | 0.6746 | 0.7597 | 0.4109 | 0.8012 | 0.7032 | 0.7961 |
| 0.1139 | 9.35 | 3460 | 0.5690 | 0.7072 | 0.8052 | 0.8839 | 0.9280 | 0.7892 | 0.8610 | 0.4264 | 0.9060 | 0.8652 | 0.8606 | 0.8616 | 0.6554 | 0.7589 | 0.3670 | 0.8032 | 0.7104 | 0.7943 |
| 0.128 | 9.41 | 3480 | 0.5414 | 0.7183 | 0.8197 | 0.8861 | 0.9360 | 0.7843 | 0.8709 | 0.5102 | 0.8850 | 0.8804 | 0.8710 | 0.8629 | 0.6689 | 0.7694 | 0.4104 | 0.8038 | 0.7141 | 0.7983 |
| 0.187 | 9.46 | 3500 | 0.5634 | 0.7112 | 0.8119 | 0.8827 | 0.9281 | 0.8073 | 0.8583 | 0.5098 | 0.9113 | 0.8225 | 0.8460 | 0.8624 | 0.6713 | 0.7650 | 0.4111 | 0.8040 | 0.6927 | 0.7722 |
| 0.1191 | 9.51 | 3520 | 0.5511 | 0.7118 | 0.8156 | 0.8833 | 0.9328 | 0.8120 | 0.8616 | 0.4826 | 0.8900 | 0.8745 | 0.8556 | 0.8633 | 0.6729 | 0.7560 | 0.3983 | 0.8014 | 0.7061 | 0.7843 |
| 0.4297 | 9.57 | 3540 | 0.5802 | 0.7006 | 0.7967 | 0.8801 | 0.9276 | 0.8095 | 0.8714 | 0.4200 | 0.9130 | 0.7672 | 0.8685 | 0.8612 | 0.6770 | 0.7381 | 0.3544 | 0.7939 | 0.6826 | 0.7970 |
| 0.2728 | 9.62 | 3560 | 0.5485 | 0.7178 | 0.8280 | 0.8843 | 0.9207 | 0.8378 | 0.8643 | 0.5383 | 0.8932 | 0.8698 | 0.8720 | 0.8608 | 0.6697 | 0.7499 | 0.4192 | 0.8001 | 0.7244 | 0.8006 |
| 0.5411 | 9.68 | 3580 | 0.6300 | 0.7000 | 0.8024 | 0.8732 | 0.9223 | 0.7995 | 0.8569 | 0.5270 | 0.9113 | 0.7974 | 0.8021 | 0.8620 | 0.6733 | 0.7490 | 0.4075 | 0.7735 | 0.7035 | 0.7310 |
| 0.2359 | 9.73 | 3600 | 0.5608 | 0.6990 | 0.8111 | 0.8740 | 0.9251 | 0.8178 | 0.8712 | 0.5368 | 0.8979 | 0.8466 | 0.7821 | 0.8635 | 0.6726 | 0.7488 | 0.4263 | 0.7908 | 0.6774 | 0.7137 |
| 0.1735 | 9.78 | 3620 | 0.5859 | 0.6998 | 0.8132 | 0.8744 | 0.9253 | 0.8187 | 0.8722 | 0.5364 | 0.8913 | 0.8545 | 0.7939 | 0.8603 | 0.6693 | 0.7457 | 0.4366 | 0.7965 | 0.6678 | 0.7221 |
| 0.1954 | 9.84 | 3640 | 0.6579 | 0.6788 | 0.7805 | 0.8646 | 0.9368 | 0.7730 | 0.8630 | 0.4399 | 0.9025 | 0.8462 | 0.7018 | 0.8585 | 0.6737 | 0.7425 | 0.3960 | 0.7802 | 0.6567 | 0.6437 |
| 0.2474 | 9.89 | 3660 | 0.5547 | 0.7076 | 0.8115 | 0.8793 | 0.9328 | 0.8057 | 0.8560 | 0.5065 | 0.8919 | 0.8565 | 0.8314 | 0.8576 | 0.6795 | 0.7413 | 0.4163 | 0.8001 | 0.7013 | 0.7571 |
| 0.2478 | 9.95 | 3680 | 0.5778 | 0.6993 | 0.8030 | 0.8745 | 0.9296 | 0.8235 | 0.8403 | 0.4924 | 0.9045 | 0.8343 | 0.7963 | 0.8558 | 0.6762 | 0.7462 | 0.3948 | 0.7867 | 0.7038 | 0.7318 |
| 0.1857 | 10.0 | 3700 | 0.5824 | 0.6996 | 0.8034 | 0.8754 | 0.9331 | 0.8167 | 0.8510 | 0.4584 | 0.8921 | 0.8765 | 0.7962 | 0.8572 | 0.6699 | 0.7504 | 0.3932 | 0.7864 | 0.7060 | 0.7339 |
| 0.1102 | 10.05 | 3720 | 0.6500 | 0.6879 | 0.7825 | 0.8689 | 0.9331 | 0.7378 | 0.8607 | 0.4543 | 0.9084 | 0.8347 | 0.7483 | 0.8571 | 0.6603 | 0.7488 | 0.3909 | 0.7729 | 0.6937 | 0.6916 |
| 0.1427 | 10.11 | 3740 | 0.5802 | 0.7147 | 0.8234 | 0.8816 | 0.9328 | 0.7885 | 0.8885 | 0.5626 | 0.8761 | 0.8680 | 0.8475 | 0.8632 | 0.6756 | 0.7543 | 0.4283 | 0.7925 | 0.7077 | 0.7810 |
| 0.1652 | 10.16 | 3760 | 0.5950 | 0.7155 | 0.8094 | 0.8857 | 0.9367 | 0.7904 | 0.8237 | 0.4910 | 0.9086 | 0.8436 | 0.8716 | 0.8638 | 0.6698 | 0.7525 | 0.4112 | 0.8042 | 0.7072 | 0.8000 |
| 0.1775 | 10.22 | 3780 | 0.5329 | 0.7040 | 0.8063 | 0.8783 | 0.9256 | 0.8012 | 0.8744 | 0.4887 | 0.9103 | 0.8444 | 0.7995 | 0.8641 | 0.6837 | 0.7596 | 0.4166 | 0.7994 | 0.6632 | 0.7417 |
| 0.171 | 10.27 | 3800 | 0.5276 | 0.7048 | 0.8099 | 0.8780 | 0.9434 | 0.7878 | 0.8639 | 0.5423 | 0.8881 | 0.8459 | 0.7977 | 0.8636 | 0.6780 | 0.7581 | 0.4305 | 0.7999 | 0.6630 | 0.7404 |
| 0.1445 | 10.32 | 3820 | 0.5358 | 0.7005 | 0.8175 | 0.8752 | 0.9294 | 0.8102 | 0.8479 | 0.5871 | 0.8890 | 0.8576 | 0.8013 | 0.8623 | 0.6748 | 0.7650 | 0.4096 | 0.7985 | 0.6626 | 0.7310 |
| 0.164 | 10.38 | 3840 | 0.5509 | 0.7138 | 0.8202 | 0.8830 | 0.9317 | 0.8165 | 0.8737 | 0.5448 | 0.8952 | 0.8290 | 0.8504 | 0.8646 | 0.6748 | 0.7633 | 0.4240 | 0.8023 | 0.6916 | 0.7759 |
| 0.172 | 10.43 | 3860 | 0.5750 | 0.7131 | 0.8161 | 0.8833 | 0.9307 | 0.7920 | 0.8881 | 0.5110 | 0.8915 | 0.8344 | 0.8648 | 0.8634 | 0.6728 | 0.7570 | 0.3961 | 0.7966 | 0.7096 | 0.7961 |
| 0.9783 | 10.49 | 3880 | 0.6087 | 0.7004 | 0.8098 | 0.8772 | 0.9175 | 0.8388 | 0.8794 | 0.4381 | 0.8962 | 0.8719 | 0.8264 | 0.8535 | 0.6610 | 0.7513 | 0.3832 | 0.7953 | 0.6982 | 0.7601 |
| 0.1288 | 10.54 | 3900 | 0.5240 | 0.7195 | 0.8230 | 0.8853 | 0.9195 | 0.8076 | 0.8856 | 0.5266 | 0.9002 | 0.8372 | 0.8842 | 0.8612 | 0.6755 | 0.7584 | 0.4267 | 0.8032 | 0.7134 | 0.7981 |
| 0.1404 | 10.59 | 3920 | 0.5173 | 0.7066 | 0.8156 | 0.8777 | 0.9208 | 0.8130 | 0.8600 | 0.5573 | 0.9068 | 0.8372 | 0.8142 | 0.8564 | 0.6665 | 0.7663 | 0.4185 | 0.7969 | 0.6999 | 0.7418 |
| 0.2075 | 10.65 | 3940 | 0.5605 | 0.7054 | 0.8160 | 0.8770 | 0.9290 | 0.8152 | 0.8608 | 0.5407 | 0.8912 | 0.8797 | 0.7959 | 0.8569 | 0.6722 | 0.7550 | 0.4321 | 0.8001 | 0.6952 | 0.7262 |
| 0.0764 | 10.7 | 3960 | 0.5575 | 0.7052 | 0.8058 | 0.8787 | 0.9405 | 0.8024 | 0.8748 | 0.4783 | 0.8943 | 0.8612 | 0.7891 | 0.8576 | 0.6738 | 0.7555 | 0.4185 | 0.8013 | 0.7004 | 0.7294 |
| 0.1381 | 10.76 | 3980 | 0.6212 | 0.6932 | 0.7998 | 0.8703 | 0.9258 | 0.8209 | 0.8836 | 0.4909 | 0.9076 | 0.8328 | 0.7371 | 0.8563 | 0.6728 | 0.7648 | 0.4115 | 0.7841 | 0.6864 | 0.6766 |
| 0.1283 | 10.81 | 4000 | 0.5661 | 0.6961 | 0.8044 | 0.8710 | 0.9393 | 0.8002 | 0.8737 | 0.5322 | 0.8879 | 0.8644 | 0.7328 | 0.8558 | 0.6722 | 0.7631 | 0.4355 | 0.7882 | 0.6814 | 0.6768 |
| 0.1478 | 10.86 | 4020 | 0.5682 | 0.7079 | 0.8085 | 0.8811 | 0.9397 | 0.8064 | 0.8768 | 0.4765 | 0.8955 | 0.8540 | 0.8108 | 0.8590 | 0.6730 | 0.7612 | 0.4063 | 0.8041 | 0.6991 | 0.7529 |
| 0.2952 | 10.92 | 4040 | 0.5359 | 0.7250 | 0.8293 | 0.8888 | 0.9325 | 0.8091 | 0.8560 | 0.5720 | 0.8976 | 0.8615 | 0.8764 | 0.8724 | 0.6781 | 0.7756 | 0.4162 | 0.8019 | 0.7216 | 0.8091 |
| 0.5411 | 10.97 | 4060 | 0.5361 | 0.7190 | 0.8235 | 0.8858 | 0.9223 | 0.8110 | 0.8714 | 0.5594 | 0.9099 | 0.8197 | 0.8711 | 0.8685 | 0.6779 | 0.7689 | 0.4029 | 0.7984 | 0.7110 | 0.8057 |
| 0.2301 | 11.03 | 4080 | 0.5714 | 0.7126 | 0.8216 | 0.8823 | 0.9264 | 0.8299 | 0.8798 | 0.5285 | 0.8972 | 0.8570 | 0.8323 | 0.8606 | 0.6722 | 0.7591 | 0.4231 | 0.8066 | 0.7008 | 0.7656 |
| 0.1374 | 11.08 | 4100 | 0.6145 | 0.6982 | 0.8072 | 0.8744 | 0.9313 | 0.8243 | 0.8528 | 0.5064 | 0.8976 | 0.8603 | 0.7777 | 0.8588 | 0.6655 | 0.7586 | 0.4031 | 0.7903 | 0.6957 | 0.7157 |
| 0.0696 | 11.14 | 4120 | 0.6681 | 0.6889 | 0.7887 | 0.8713 | 0.9370 | 0.7977 | 0.8783 | 0.4220 | 0.9029 | 0.8375 | 0.7457 | 0.8601 | 0.6768 | 0.7550 | 0.3695 | 0.7831 | 0.6903 | 0.6872 |
| 0.1682 | 11.19 | 4140 | 0.6977 | 0.6822 | 0.7875 | 0.8675 | 0.9324 | 0.8060 | 0.8618 | 0.4379 | 0.9069 | 0.8523 | 0.7155 | 0.8623 | 0.6758 | 0.7521 | 0.3759 | 0.7823 | 0.6688 | 0.6581 |
| 0.094 | 11.24 | 4160 | 0.6576 | 0.6958 | 0.8044 | 0.8724 | 0.9350 | 0.8088 | 0.8627 | 0.5259 | 0.8956 | 0.8432 | 0.7598 | 0.8656 | 0.6738 | 0.7526 | 0.4046 | 0.7805 | 0.6950 | 0.6984 |
| 0.1509 | 11.3 | 4180 | 0.6880 | 0.6966 | 0.8021 | 0.8720 | 0.9248 | 0.8019 | 0.8660 | 0.5110 | 0.9091 | 0.8516 | 0.7502 | 0.8669 | 0.6753 | 0.7664 | 0.4210 | 0.7793 | 0.6824 | 0.6852 |
| 0.2837 | 11.35 | 4200 | 0.6709 | 0.6965 | 0.8002 | 0.8720 | 0.9336 | 0.7865 | 0.8636 | 0.5164 | 0.9019 | 0.8536 | 0.7456 | 0.8683 | 0.6763 | 0.7670 | 0.4232 | 0.7782 | 0.6790 | 0.6838 |
| 0.1695 | 11.41 | 4220 | 0.6810 | 0.6960 | 0.8041 | 0.8721 | 0.9325 | 0.8124 | 0.8749 | 0.5153 | 0.8988 | 0.8507 | 0.7444 | 0.8695 | 0.6771 | 0.7619 | 0.4223 | 0.7803 | 0.6784 | 0.6828 |
| 0.1717 | 11.46 | 4240 | 0.6574 | 0.6890 | 0.8055 | 0.8690 | 0.9305 | 0.8075 | 0.8640 | 0.5508 | 0.8933 | 0.8580 | 0.7346 | 0.8650 | 0.6748 | 0.7546 | 0.4148 | 0.7894 | 0.6507 | 0.6737 |
| 0.2947 | 11.51 | 4260 | 0.6866 | 0.6883 | 0.7972 | 0.8703 | 0.9338 | 0.8128 | 0.8784 | 0.4646 | 0.8942 | 0.8533 | 0.7435 | 0.8638 | 0.6768 | 0.7540 | 0.3961 | 0.7886 | 0.6586 | 0.6802 |
| 0.1125 | 11.57 | 4280 | 0.6372 | 0.6874 | 0.7962 | 0.8695 | 0.9286 | 0.7964 | 0.8834 | 0.4716 | 0.9004 | 0.8624 | 0.7303 | 0.8649 | 0.6792 | 0.7534 | 0.3993 | 0.7891 | 0.6568 | 0.6689 |
| 0.2224 | 11.62 | 4300 | 0.6711 | 0.6870 | 0.8007 | 0.8668 | 0.9369 | 0.7884 | 0.8636 | 0.5381 | 0.8829 | 0.8759 | 0.7193 | 0.8609 | 0.6738 | 0.7665 | 0.4119 | 0.7823 | 0.6553 | 0.6586 |
| 0.1141 | 11.68 | 4320 | 0.6735 | 0.6876 | 0.7964 | 0.8681 | 0.9282 | 0.7929 | 0.8708 | 0.5022 | 0.8997 | 0.8422 | 0.7391 | 0.8626 | 0.6761 | 0.7528 | 0.3917 | 0.7769 | 0.6745 | 0.6788 |
| 0.1375 | 11.73 | 4340 | 0.6837 | 0.6863 | 0.7983 | 0.8666 | 0.9258 | 0.8016 | 0.8788 | 0.5143 | 0.8961 | 0.8359 | 0.7355 | 0.8606 | 0.6774 | 0.7453 | 0.4008 | 0.7761 | 0.6699 | 0.6742 |
| 0.169 | 11.78 | 4360 | 0.6585 | 0.6925 | 0.7968 | 0.8712 | 0.9429 | 0.8017 | 0.8572 | 0.5097 | 0.9070 | 0.8599 | 0.6992 | 0.8650 | 0.6872 | 0.7562 | 0.4293 | 0.7933 | 0.6490 | 0.6676 |
| 0.1052 | 11.84 | 4380 | 0.6623 | 0.6951 | 0.8108 | 0.8711 | 0.9314 | 0.8442 | 0.8677 | 0.5472 | 0.9004 | 0.8746 | 0.7099 | 0.8670 | 0.6841 | 0.7572 | 0.4333 | 0.7878 | 0.6607 | 0.6754 |
| 0.2518 | 11.89 | 4400 | 0.6526 | 0.6826 | 0.7901 | 0.8648 | 0.9440 | 0.8438 | 0.8729 | 0.4834 | 0.9067 | 0.8347 | 0.6452 | 0.8557 | 0.6809 | 0.7546 | 0.4202 | 0.7815 | 0.6648 | 0.6201 |
| 0.1708 | 11.95 | 4420 | 0.6365 | 0.6918 | 0.8027 | 0.8692 | 0.9400 | 0.8431 | 0.8710 | 0.5182 | 0.8936 | 0.8393 | 0.7140 | 0.8647 | 0.6812 | 0.7509 | 0.4215 | 0.7762 | 0.6746 | 0.6736 |
| 0.1352 | 12.0 | 4440 | 0.5239 | 0.7166 | 0.8241 | 0.8839 | 0.9382 | 0.8144 | 0.8535 | 0.5460 | 0.8817 | 0.8874 | 0.8472 | 0.8663 | 0.6748 | 0.7577 | 0.4402 | 0.8035 | 0.6969 | 0.7769 |
| 0.2215 | 12.05 | 4460 | 0.5723 | 0.7126 | 0.8105 | 0.8812 | 0.9360 | 0.8086 | 0.8765 | 0.5050 | 0.9054 | 0.8426 | 0.7994 | 0.8729 | 0.6912 | 0.7601 | 0.4197 | 0.7850 | 0.7110 | 0.7482 |
| 0.1835 | 12.11 | 4480 | 0.5951 | 0.7038 | 0.8079 | 0.8764 | 0.9286 | 0.8200 | 0.8768 | 0.4997 | 0.9033 | 0.8435 | 0.7837 | 0.8646 | 0.6764 | 0.7581 | 0.4137 | 0.7838 | 0.7070 | 0.7228 |
| 0.2029 | 12.16 | 4500 | 0.6254 | 0.7049 | 0.8123 | 0.8756 | 0.9360 | 0.8250 | 0.8460 | 0.5341 | 0.8887 | 0.8671 | 0.7890 | 0.8630 | 0.6719 | 0.7603 | 0.4181 | 0.7786 | 0.7167 | 0.7254 |
| 0.1549 | 12.22 | 4520 | 0.6314 | 0.7073 | 0.8139 | 0.8782 | 0.9258 | 0.7967 | 0.8786 | 0.5177 | 0.8901 | 0.8628 | 0.8257 | 0.8652 | 0.6710 | 0.7511 | 0.4073 | 0.7822 | 0.7200 | 0.7542 |
| 0.2682 | 12.27 | 4540 | 0.6696 | 0.7040 | 0.8131 | 0.8745 | 0.9259 | 0.8000 | 0.8625 | 0.5390 | 0.8867 | 0.8774 | 0.8001 | 0.8650 | 0.6781 | 0.7544 | 0.4102 | 0.7729 | 0.7174 | 0.7298 |
| 0.1751 | 12.32 | 4560 | 0.6386 | 0.7053 | 0.8165 | 0.8751 | 0.9265 | 0.8053 | 0.8722 | 0.5559 | 0.8859 | 0.8757 | 0.7936 | 0.8665 | 0.6761 | 0.7562 | 0.4201 | 0.7745 | 0.7174 | 0.7264 |
| 0.0681 | 12.38 | 4580 | 0.6112 | 0.7075 | 0.8127 | 0.8770 | 0.9252 | 0.8008 | 0.8658 | 0.5365 | 0.9002 | 0.8601 | 0.8001 | 0.8671 | 0.6788 | 0.7623 | 0.4154 | 0.7788 | 0.7181 | 0.7324 |
| 0.1016 | 12.43 | 4600 | 0.6245 | 0.7053 | 0.8111 | 0.8769 | 0.9251 | 0.8187 | 0.8781 | 0.5013 | 0.8986 | 0.8562 | 0.7999 | 0.8659 | 0.6752 | 0.7600 | 0.4042 | 0.7804 | 0.7226 | 0.7289 |
| 0.1233 | 12.49 | 4620 | 0.6009 | 0.7065 | 0.8072 | 0.8787 | 0.9320 | 0.8111 | 0.8669 | 0.4872 | 0.9005 | 0.8360 | 0.8165 | 0.8679 | 0.6797 | 0.7621 | 0.3988 | 0.7836 | 0.7122 | 0.7408 |
| 0.2694 | 12.54 | 4640 | 0.6410 | 0.7066 | 0.8082 | 0.8787 | 0.9336 | 0.8113 | 0.8681 | 0.4848 | 0.8956 | 0.8512 | 0.8130 | 0.8656 | 0.6820 | 0.7604 | 0.3959 | 0.7846 | 0.7101 | 0.7476 |
| 0.167 | 12.59 | 4660 | 0.6926 | 0.6951 | 0.7996 | 0.8719 | 0.9271 | 0.8188 | 0.8598 | 0.4841 | 0.9079 | 0.8389 | 0.7609 | 0.8642 | 0.6753 | 0.7625 | 0.4034 | 0.7781 | 0.6853 | 0.6971 |
| 0.33 | 12.65 | 4680 | 0.6355 | 0.7048 | 0.8086 | 0.8789 | 0.9300 | 0.8155 | 0.8647 | 0.4891 | 0.9046 | 0.8525 | 0.8039 | 0.8676 | 0.6775 | 0.7614 | 0.4098 | 0.7950 | 0.6840 | 0.7382 |
| 0.1355 | 12.7 | 4700 | 0.5896 | 0.7248 | 0.8201 | 0.8894 | 0.9348 | 0.8012 | 0.8679 | 0.4971 | 0.9002 | 0.8607 | 0.8787 | 0.8669 | 0.6813 | 0.7594 | 0.4294 | 0.8087 | 0.7242 | 0.8034 |
| 0.2499 | 12.76 | 4720 | 0.5623 | 0.7264 | 0.8232 | 0.8898 | 0.9282 | 0.8038 | 0.8683 | 0.5214 | 0.9097 | 0.8563 | 0.8750 | 0.8676 | 0.6758 | 0.7634 | 0.4414 | 0.8102 | 0.7230 | 0.8030 |
| 0.1029 | 12.81 | 4740 | 0.6712 | 0.7012 | 0.8100 | 0.8733 | 0.9299 | 0.7999 | 0.8721 | 0.5470 | 0.8969 | 0.8772 | 0.7470 | 0.8675 | 0.6750 | 0.7622 | 0.4465 | 0.7826 | 0.6886 | 0.6863 |
| 0.1231 | 12.86 | 4760 | 0.7289 | 0.6920 | 0.8010 | 0.8693 | 0.9323 | 0.7921 | 0.8824 | 0.5139 | 0.8931 | 0.8699 | 0.7231 | 0.8641 | 0.6757 | 0.7559 | 0.4293 | 0.7811 | 0.6739 | 0.6644 |
| 0.2009 | 12.92 | 4780 | 0.7378 | 0.6887 | 0.7979 | 0.8672 | 0.9382 | 0.8301 | 0.8803 | 0.5180 | 0.9033 | 0.8375 | 0.6782 | 0.8617 | 0.6827 | 0.7636 | 0.4322 | 0.7816 | 0.6604 | 0.6385 |
| 0.1391 | 12.97 | 4800 | 0.6546 | 0.6991 | 0.7956 | 0.8739 | 0.9334 | 0.7969 | 0.8673 | 0.4618 | 0.9095 | 0.8435 | 0.7570 | 0.8628 | 0.6840 | 0.7659 | 0.4076 | 0.7813 | 0.6990 | 0.6933 |
| 0.1298 | 13.03 | 4820 | 0.6830 | 0.6940 | 0.7989 | 0.8709 | 0.9401 | 0.8336 | 0.8854 | 0.4483 | 0.8887 | 0.8719 | 0.7243 | 0.8627 | 0.6936 | 0.7557 | 0.3974 | 0.7755 | 0.6930 | 0.6803 |
| 0.1843 | 13.08 | 4840 | 0.7086 | 0.7003 | 0.8017 | 0.8742 | 0.9299 | 0.8174 | 0.8665 | 0.4689 | 0.9055 | 0.8676 | 0.7561 | 0.8648 | 0.6865 | 0.7617 | 0.4069 | 0.7793 | 0.7020 | 0.7011 |
| 0.1717 | 13.14 | 4860 | 0.7067 | 0.6857 | 0.7925 | 0.8671 | 0.9345 | 0.8209 | 0.8780 | 0.4611 | 0.8993 | 0.8443 | 0.7094 | 0.8618 | 0.6842 | 0.7541 | 0.4024 | 0.7795 | 0.6640 | 0.6537 |
| 0.1198 | 13.19 | 4880 | 0.6974 | 0.6874 | 0.7955 | 0.8660 | 0.9389 | 0.8013 | 0.8552 | 0.5258 | 0.8956 | 0.8557 | 0.6961 | 0.8599 | 0.6824 | 0.7611 | 0.4257 | 0.7790 | 0.6610 | 0.6424 |
| 0.3224 | 13.24 | 4900 | 0.7392 | 0.6852 | 0.7930 | 0.8664 | 0.9267 | 0.8058 | 0.8745 | 0.4755 | 0.9041 | 0.8543 | 0.7103 | 0.8625 | 0.6817 | 0.7580 | 0.4026 | 0.7775 | 0.6629 | 0.6512 |
| 0.0703 | 13.3 | 4920 | 0.6311 | 0.6933 | 0.8022 | 0.8709 | 0.9335 | 0.8123 | 0.8901 | 0.4878 | 0.8906 | 0.8626 | 0.7387 | 0.8660 | 0.6798 | 0.7560 | 0.4106 | 0.7802 | 0.6826 | 0.6782 |
| 0.2431 | 13.35 | 4940 | 0.6244 | 0.6827 | 0.7976 | 0.8660 | 0.9368 | 0.8237 | 0.8843 | 0.4969 | 0.8942 | 0.8755 | 0.6720 | 0.8667 | 0.6838 | 0.7580 | 0.4172 | 0.7888 | 0.6334 | 0.6315 |
| 0.2872 | 13.41 | 4960 | 0.6453 | 0.6839 | 0.8007 | 0.8661 | 0.9319 | 0.8085 | 0.8936 | 0.5322 | 0.8962 | 0.8619 | 0.6804 | 0.8669 | 0.6813 | 0.7582 | 0.4252 | 0.7899 | 0.6283 | 0.6373 |
| 0.1403 | 13.46 | 4980 | 0.6293 | 0.6834 | 0.8007 | 0.8648 | 0.9383 | 0.7919 | 0.8843 | 0.5445 | 0.8806 | 0.8801 | 0.6849 | 0.8623 | 0.6767 | 0.7656 | 0.4277 | 0.7883 | 0.6326 | 0.6307 |
| 1.0722 | 13.51 | 5000 | 0.6654 | 0.6859 | 0.7976 | 0.8675 | 0.9344 | 0.8211 | 0.8760 | 0.4980 | 0.8967 | 0.8460 | 0.7110 | 0.8603 | 0.6754 | 0.7626 | 0.4067 | 0.7873 | 0.6587 | 0.6504 |
| 0.1002 | 13.57 | 5020 | 0.7546 | 0.6814 | 0.7922 | 0.8656 | 0.9375 | 0.8112 | 0.8670 | 0.4721 | 0.8929 | 0.8712 | 0.6935 | 0.8587 | 0.6765 | 0.7573 | 0.3976 | 0.7869 | 0.6532 | 0.6392 |
| 0.1098 | 13.62 | 5040 | 0.7212 | 0.6966 | 0.8019 | 0.8733 | 0.9264 | 0.8044 | 0.8714 | 0.5056 | 0.9134 | 0.8447 | 0.7474 | 0.8680 | 0.6820 | 0.7642 | 0.4191 | 0.7892 | 0.6665 | 0.6874 |
| 0.2066 | 13.68 | 5060 | 0.6863 | 0.6966 | 0.8103 | 0.8722 | 0.9330 | 0.7999 | 0.8691 | 0.5697 | 0.8934 | 0.8613 | 0.7454 | 0.8675 | 0.6773 | 0.7620 | 0.4325 | 0.7898 | 0.6612 | 0.6862 |
| 0.1899 | 13.73 | 5080 | 0.6502 | 0.6979 | 0.8049 | 0.8732 | 0.9322 | 0.7924 | 0.8634 | 0.5399 | 0.9034 | 0.8523 | 0.7504 | 0.8681 | 0.6759 | 0.7628 | 0.4264 | 0.7855 | 0.6760 | 0.6909 |
| 0.1979 | 13.78 | 5100 | 0.7042 | 0.6964 | 0.8077 | 0.8715 | 0.9303 | 0.8046 | 0.8646 | 0.5369 | 0.8919 | 0.8739 | 0.7514 | 0.8670 | 0.6755 | 0.7604 | 0.4293 | 0.7819 | 0.6746 | 0.6859 |
| 0.101 | 13.84 | 5120 | 0.6623 | 0.6886 | 0.8036 | 0.8679 | 0.9312 | 0.8124 | 0.8591 | 0.5298 | 0.8907 | 0.8811 | 0.7211 | 0.8646 | 0.6761 | 0.7561 | 0.4335 | 0.7892 | 0.6432 | 0.6575 |
| 0.2066 | 13.89 | 5140 | 0.6422 | 0.6874 | 0.7907 | 0.8690 | 0.9376 | 0.7826 | 0.8576 | 0.4964 | 0.9090 | 0.8384 | 0.7133 | 0.8647 | 0.6808 | 0.7606 | 0.4180 | 0.7917 | 0.6439 | 0.6519 |
| 0.0987 | 13.95 | 5160 | 0.6607 | 0.6876 | 0.7963 | 0.8682 | 0.9312 | 0.8007 | 0.8677 | 0.4967 | 0.9030 | 0.8649 | 0.7095 | 0.8636 | 0.6820 | 0.7596 | 0.4225 | 0.7914 | 0.6458 | 0.6484 |
| 0.1414 | 14.0 | 5180 | 0.6363 | 0.6908 | 0.8042 | 0.8691 | 0.9303 | 0.8263 | 0.8735 | 0.5228 | 0.8961 | 0.8547 | 0.7257 | 0.8644 | 0.6784 | 0.7596 | 0.4294 | 0.7875 | 0.6550 | 0.6616 |
| 0.0547 | 14.05 | 5200 | 0.6666 | 0.6897 | 0.7980 | 0.8699 | 0.9303 | 0.8190 | 0.8796 | 0.4665 | 0.9009 | 0.8681 | 0.7216 | 0.8671 | 0.6806 | 0.7619 | 0.4074 | 0.7841 | 0.6626 | 0.6640 |
| 1.5168 | 14.11 | 5220 | 0.6171 | 0.7057 | 0.8126 | 0.8766 | 0.9272 | 0.8151 | 0.8745 | 0.5186 | 0.8966 | 0.8696 | 0.7865 | 0.8692 | 0.6786 | 0.7633 | 0.4333 | 0.7839 | 0.6968 | 0.7147 |
| 0.1673 | 14.16 | 5240 | 0.6295 | 0.6897 | 0.8097 | 0.8675 | 0.9364 | 0.8264 | 0.8935 | 0.5600 | 0.8800 | 0.8731 | 0.6989 | 0.8651 | 0.6819 | 0.7547 | 0.4471 | 0.7880 | 0.6358 | 0.6552 |
| 0.1493 | 14.22 | 5260 | 0.6006 | 0.6862 | 0.7948 | 0.8689 | 0.9385 | 0.8056 | 0.8756 | 0.4804 | 0.8975 | 0.8563 | 0.7097 | 0.8645 | 0.6757 | 0.7584 | 0.4147 | 0.7924 | 0.6424 | 0.6556 |
| 0.0582 | 14.27 | 5280 | 0.5851 | 0.7049 | 0.8050 | 0.8796 | 0.9356 | 0.8082 | 0.8594 | 0.4799 | 0.9075 | 0.8467 | 0.7975 | 0.8590 | 0.6705 | 0.7559 | 0.4123 | 0.8048 | 0.6970 | 0.7349 |
| 0.114 | 14.32 | 5300 | 0.5824 | 0.7205 | 0.8271 | 0.8869 | 0.9228 | 0.8209 | 0.8661 | 0.5326 | 0.9004 | 0.8765 | 0.8705 | 0.8676 | 0.6727 | 0.7602 | 0.4219 | 0.8059 | 0.7183 | 0.7972 |
| 0.1722 | 14.38 | 5320 | 0.6211 | 0.7146 | 0.8113 | 0.8857 | 0.9353 | 0.7938 | 0.8697 | 0.4707 | 0.9037 | 0.8576 | 0.8484 | 0.8671 | 0.6808 | 0.7568 | 0.4033 | 0.8081 | 0.7090 | 0.7771 |
| 0.0931 | 14.43 | 5340 | 0.6304 | 0.7134 | 0.8145 | 0.8833 | 0.9351 | 0.7942 | 0.8683 | 0.5071 | 0.8974 | 0.8728 | 0.8267 | 0.8641 | 0.6812 | 0.7580 | 0.4198 | 0.8058 | 0.7037 | 0.7615 |
| 0.1883 | 14.49 | 5360 | 0.6069 | 0.7148 | 0.8192 | 0.8832 | 0.9305 | 0.8042 | 0.8807 | 0.5439 | 0.9015 | 0.8448 | 0.8290 | 0.8628 | 0.6787 | 0.7639 | 0.4244 | 0.8060 | 0.7053 | 0.7625 |
| 0.1474 | 14.54 | 5380 | 0.6856 | 0.6956 | 0.8127 | 0.8697 | 0.9282 | 0.8139 | 0.8909 | 0.5721 | 0.8863 | 0.8652 | 0.7324 | 0.8595 | 0.6753 | 0.7588 | 0.4309 | 0.7811 | 0.6875 | 0.6764 |
| 0.1547 | 14.59 | 5400 | 0.7256 | 0.6997 | 0.8039 | 0.8721 | 0.9318 | 0.7957 | 0.8634 | 0.5218 | 0.8981 | 0.8713 | 0.7451 | 0.8622 | 0.6808 | 0.7673 | 0.4271 | 0.7774 | 0.6980 | 0.6855 |
| 0.1503 | 14.65 | 5420 | 0.7249 | 0.6986 | 0.8017 | 0.8730 | 0.9313 | 0.8156 | 0.8755 | 0.4876 | 0.9032 | 0.8462 | 0.7522 | 0.8629 | 0.6729 | 0.7658 | 0.4166 | 0.7789 | 0.7031 | 0.6896 |
| 0.3028 | 14.7 | 5440 | 0.7174 | 0.6935 | 0.8058 | 0.8697 | 0.9339 | 0.8206 | 0.8687 | 0.4987 | 0.8781 | 0.8941 | 0.7467 | 0.8642 | 0.6760 | 0.7566 | 0.4069 | 0.7706 | 0.6926 | 0.6874 |
| 0.161 | 14.76 | 5460 | 0.6711 | 0.6985 | 0.8020 | 0.8726 | 0.9362 | 0.8113 | 0.8586 | 0.4919 | 0.8954 | 0.8701 | 0.7504 | 0.8628 | 0.6783 | 0.7575 | 0.4173 | 0.7783 | 0.7059 | 0.6895 |
| 0.1155 | 14.81 | 5480 | 0.6929 | 0.7092 | 0.8132 | 0.8781 | 0.9297 | 0.8222 | 0.8547 | 0.5190 | 0.8997 | 0.8725 | 0.7948 | 0.8689 | 0.6824 | 0.7599 | 0.4231 | 0.7809 | 0.7219 | 0.7272 |
| 0.1656 | 14.86 | 5500 | 0.6374 | 0.7092 | 0.8154 | 0.8779 | 0.9309 | 0.8140 | 0.8666 | 0.5398 | 0.8948 | 0.8678 | 0.7936 | 0.8690 | 0.6801 | 0.7609 | 0.4255 | 0.7808 | 0.7206 | 0.7276 |
| 1.1364 | 14.92 | 5520 | 0.6663 | 0.7064 | 0.8114 | 0.8770 | 0.9361 | 0.8201 | 0.8679 | 0.5215 | 0.8919 | 0.8507 | 0.7917 | 0.8670 | 0.6756 | 0.7590 | 0.4192 | 0.7799 | 0.7176 | 0.7264 |
| 0.1626 | 14.97 | 5540 | 0.6779 | 0.7092 | 0.8143 | 0.8782 | 0.9350 | 0.8199 | 0.8622 | 0.5283 | 0.8937 | 0.8722 | 0.7886 | 0.8688 | 0.6778 | 0.7671 | 0.4223 | 0.7807 | 0.7226 | 0.7254 |
| 0.2601 | 15.03 | 5560 | 0.6393 | 0.7047 | 0.8080 | 0.8766 | 0.9358 | 0.8112 | 0.8806 | 0.5048 | 0.8977 | 0.8597 | 0.7664 | 0.8673 | 0.6785 | 0.7661 | 0.4178 | 0.7828 | 0.7132 | 0.7075 |
| 0.188 | 15.08 | 5580 | 0.6080 | 0.7148 | 0.8125 | 0.8833 | 0.9362 | 0.8070 | 0.8606 | 0.5002 | 0.9046 | 0.8576 | 0.8209 | 0.8707 | 0.6787 | 0.7652 | 0.4151 | 0.7921 | 0.7264 | 0.7551 |
| 0.1492 | 15.14 | 5600 | 0.6940 | 0.7016 | 0.7966 | 0.8765 | 0.9364 | 0.8041 | 0.8777 | 0.4570 | 0.9117 | 0.8137 | 0.7754 | 0.8678 | 0.6812 | 0.7633 | 0.4006 | 0.7813 | 0.7049 | 0.7123 |
| 0.1306 | 15.19 | 5620 | 0.7043 | 0.7040 | 0.8105 | 0.8753 | 0.9296 | 0.8196 | 0.8822 | 0.5192 | 0.8983 | 0.8576 | 0.7668 | 0.8675 | 0.6825 | 0.7640 | 0.4176 | 0.7791 | 0.7123 | 0.7048 |
| 0.0968 | 15.24 | 5640 | 0.7197 | 0.6986 | 0.8082 | 0.8724 | 0.9329 | 0.8196 | 0.8689 | 0.5488 | 0.9001 | 0.8462 | 0.7406 | 0.8669 | 0.6779 | 0.7663 | 0.4198 | 0.7789 | 0.6983 | 0.6819 |
| 0.1836 | 15.3 | 5660 | 0.7781 | 0.7031 | 0.8063 | 0.8744 | 0.9356 | 0.8022 | 0.8591 | 0.5408 | 0.9025 | 0.8474 | 0.7565 | 0.8657 | 0.6758 | 0.7670 | 0.4257 | 0.7782 | 0.7116 | 0.6975 |
| 0.0926 | 15.35 | 5680 | 0.7479 | 0.7083 | 0.8136 | 0.8773 | 0.9327 | 0.7990 | 0.8648 | 0.5458 | 0.8933 | 0.8697 | 0.7897 | 0.8696 | 0.6749 | 0.7679 | 0.4261 | 0.7771 | 0.7169 | 0.7259 |
| 0.1173 | 15.41 | 5700 | 0.7123 | 0.7082 | 0.8120 | 0.8772 | 0.9352 | 0.8126 | 0.8648 | 0.5350 | 0.8968 | 0.8624 | 0.7771 | 0.8663 | 0.6769 | 0.7659 | 0.4329 | 0.7819 | 0.7198 | 0.7139 |
| 0.1977 | 15.46 | 5720 | 0.6526 | 0.7084 | 0.8191 | 0.8770 | 0.9274 | 0.8162 | 0.8681 | 0.5519 | 0.8888 | 0.8902 | 0.7913 | 0.8688 | 0.6758 | 0.7655 | 0.4325 | 0.7797 | 0.7113 | 0.7254 |
| 0.096 | 15.51 | 5740 | 0.6237 | 0.7109 | 0.8066 | 0.8798 | 0.9396 | 0.7830 | 0.8635 | 0.4962 | 0.8994 | 0.8761 | 0.7881 | 0.8659 | 0.6775 | 0.7693 | 0.4289 | 0.7880 | 0.7227 | 0.7240 |
| 0.1514 | 15.57 | 5760 | 0.6790 | 0.7061 | 0.8115 | 0.8763 | 0.9263 | 0.8342 | 0.8704 | 0.5164 | 0.9070 | 0.8550 | 0.7710 | 0.8656 | 0.6743 | 0.7730 | 0.4308 | 0.7834 | 0.7113 | 0.7043 |
| 0.0453 | 15.62 | 5780 | 0.6741 | 0.7045 | 0.8091 | 0.8758 | 0.9345 | 0.8207 | 0.8796 | 0.5021 | 0.8957 | 0.8724 | 0.7586 | 0.8654 | 0.6798 | 0.7735 | 0.4289 | 0.7846 | 0.7038 | 0.6956 |
| 0.1224 | 15.68 | 5800 | 0.7243 | 0.6900 | 0.7975 | 0.8688 | 0.9333 | 0.7996 | 0.8867 | 0.4814 | 0.8940 | 0.8740 | 0.7132 | 0.8647 | 0.6823 | 0.7650 | 0.4163 | 0.7812 | 0.6668 | 0.6540 |
| 0.0927 | 15.73 | 5820 | 0.7237 | 0.6903 | 0.7969 | 0.8689 | 0.9342 | 0.8029 | 0.8813 | 0.4867 | 0.8985 | 0.8654 | 0.7091 | 0.8633 | 0.6819 | 0.7656 | 0.4204 | 0.7846 | 0.6672 | 0.6490 |
| 0.1154 | 15.78 | 5840 | 0.6878 | 0.6922 | 0.8016 | 0.8690 | 0.9342 | 0.7988 | 0.8671 | 0.5313 | 0.8968 | 0.8735 | 0.7098 | 0.8639 | 0.6824 | 0.7651 | 0.4400 | 0.7879 | 0.6582 | 0.6482 |
| 0.1408 | 15.84 | 5860 | 0.6410 | 0.6894 | 0.8008 | 0.8677 | 0.9385 | 0.7996 | 0.8818 | 0.5320 | 0.8902 | 0.8676 | 0.6960 | 0.8616 | 0.6827 | 0.7591 | 0.4409 | 0.7905 | 0.6497 | 0.6416 |
| 0.1614 | 15.89 | 5880 | 0.6993 | 0.6902 | 0.7988 | 0.8685 | 0.9335 | 0.8126 | 0.8511 | 0.5153 | 0.9032 | 0.8700 | 0.7061 | 0.8627 | 0.6823 | 0.7575 | 0.4358 | 0.7892 | 0.6553 | 0.6485 |
| 0.2869 | 15.95 | 5900 | 0.7689 | 0.6905 | 0.8024 | 0.8678 | 0.9301 | 0.8079 | 0.8756 | 0.5322 | 0.8979 | 0.8709 | 0.7022 | 0.8620 | 0.6833 | 0.7620 | 0.4354 | 0.7858 | 0.6595 | 0.6458 |
| 0.139 | 16.0 | 5920 | 0.7812 | 0.6913 | 0.7976 | 0.8688 | 0.9289 | 0.7985 | 0.8763 | 0.4996 | 0.9057 | 0.8696 | 0.7049 | 0.8636 | 0.6856 | 0.7627 | 0.4295 | 0.7846 | 0.6648 | 0.6482 |
| 0.0939 | 16.05 | 5940 | 0.7038 | 0.6927 | 0.8014 | 0.8701 | 0.9300 | 0.8017 | 0.8775 | 0.5124 | 0.9019 | 0.8705 | 0.7161 | 0.8671 | 0.6822 | 0.7667 | 0.4266 | 0.7854 | 0.6648 | 0.6564 |
| 0.1643 | 16.11 | 5960 | 0.7743 | 0.6922 | 0.8015 | 0.8688 | 0.9334 | 0.8147 | 0.8787 | 0.5176 | 0.8993 | 0.8662 | 0.7003 | 0.8635 | 0.6791 | 0.7618 | 0.4341 | 0.7815 | 0.6791 | 0.6466 |
| 0.1276 | 16.16 | 5980 | 0.7730 | 0.7013 | 0.8108 | 0.8735 | 0.9302 | 0.8330 | 0.8632 | 0.5387 | 0.9018 | 0.8648 | 0.7436 | 0.8659 | 0.6750 | 0.7678 | 0.4333 | 0.7818 | 0.7032 | 0.6824 |
| 0.5234 | 16.22 | 6000 | 0.7781 | 0.7015 | 0.8034 | 0.8743 | 0.9338 | 0.7904 | 0.8730 | 0.4974 | 0.8964 | 0.8727 | 0.7602 | 0.8650 | 0.6780 | 0.7626 | 0.4227 | 0.7802 | 0.7066 | 0.6952 |
| 0.153 | 16.27 | 6020 | 0.7155 | 0.7071 | 0.8094 | 0.8771 | 0.9313 | 0.8078 | 0.8710 | 0.5146 | 0.9031 | 0.8660 | 0.7716 | 0.8679 | 0.6791 | 0.7667 | 0.4346 | 0.7844 | 0.7103 | 0.7067 |
| 0.0918 | 16.32 | 6040 | 0.7164 | 0.7049 | 0.8093 | 0.8758 | 0.9283 | 0.8091 | 0.8751 | 0.5270 | 0.9064 | 0.8570 | 0.7621 | 0.8673 | 0.6792 | 0.7667 | 0.4297 | 0.7834 | 0.7079 | 0.6999 |
| 0.0636 | 16.38 | 6060 | 0.8310 | 0.6895 | 0.7941 | 0.8673 | 0.9382 | 0.7911 | 0.8637 | 0.5198 | 0.9035 | 0.8511 | 0.6912 | 0.8553 | 0.6711 | 0.7610 | 0.4239 | 0.7807 | 0.6951 | 0.6397 |
| 0.15 | 16.43 | 6080 | 0.6919 | 0.7024 | 0.8098 | 0.8751 | 0.9349 | 0.8071 | 0.8785 | 0.5362 | 0.8946 | 0.8544 | 0.7629 | 0.8661 | 0.6761 | 0.7635 | 0.4254 | 0.7848 | 0.6994 | 0.7014 |
| 0.1397 | 16.49 | 6100 | 0.7529 | 0.6905 | 0.7951 | 0.8702 | 0.9370 | 0.7902 | 0.8655 | 0.5048 | 0.9066 | 0.8432 | 0.7181 | 0.8666 | 0.6776 | 0.7660 | 0.4135 | 0.7859 | 0.6617 | 0.6622 |
| 0.0939 | 16.54 | 6120 | 0.8069 | 0.6875 | 0.7908 | 0.8694 | 0.9296 | 0.7984 | 0.8659 | 0.4716 | 0.9179 | 0.8401 | 0.7122 | 0.8665 | 0.6795 | 0.7671 | 0.4010 | 0.7860 | 0.6569 | 0.6553 |
| 0.6179 | 16.59 | 6140 | 0.7314 | 0.6876 | 0.7951 | 0.8700 | 0.9324 | 0.8198 | 0.8660 | 0.4583 | 0.9070 | 0.8694 | 0.7130 | 0.8660 | 0.6785 | 0.7674 | 0.4001 | 0.7918 | 0.6544 | 0.6551 |
| 0.1015 | 16.65 | 6160 | 0.7299 | 0.6887 | 0.8009 | 0.8696 | 0.9314 | 0.8189 | 0.8649 | 0.4977 | 0.8981 | 0.8685 | 0.7263 | 0.8675 | 0.6794 | 0.7623 | 0.4013 | 0.7864 | 0.6576 | 0.6667 |
| 0.0863 | 16.7 | 6180 | 0.6746 | 0.6980 | 0.8068 | 0.8737 | 0.9299 | 0.8224 | 0.8665 | 0.5073 | 0.9009 | 0.8643 | 0.7562 | 0.8688 | 0.6769 | 0.7616 | 0.4054 | 0.7817 | 0.6951 | 0.6965 |
| 0.1738 | 16.76 | 6200 | 0.6060 | 0.7123 | 0.8164 | 0.8813 | 0.9310 | 0.8146 | 0.8844 | 0.5084 | 0.8948 | 0.8641 | 0.8174 | 0.8694 | 0.6797 | 0.7624 | 0.4084 | 0.7880 | 0.7294 | 0.7487 |
| 0.2009 | 16.81 | 6220 | 0.6513 | 0.7078 | 0.8044 | 0.8800 | 0.9336 | 0.8120 | 0.8611 | 0.4601 | 0.9109 | 0.8631 | 0.7903 | 0.8692 | 0.6798 | 0.7673 | 0.4001 | 0.7875 | 0.7233 | 0.7272 |
| 2.6765 | 16.86 | 6240 | 0.7115 | 0.7013 | 0.7960 | 0.8786 | 0.9366 | 0.7898 | 0.8870 | 0.4108 | 0.9047 | 0.8551 | 0.7879 | 0.8672 | 0.6791 | 0.7635 | 0.3691 | 0.7866 | 0.7177 | 0.7257 |
| 0.0662 | 16.92 | 6260 | 0.6028 | 0.7107 | 0.8123 | 0.8808 | 0.9364 | 0.8211 | 0.8591 | 0.5004 | 0.8994 | 0.8649 | 0.8050 | 0.8684 | 0.6741 | 0.7666 | 0.4136 | 0.7891 | 0.7246 | 0.7387 |
| 0.1372 | 16.97 | 6280 | 0.6318 | 0.7080 | 0.8142 | 0.8778 | 0.9304 | 0.8114 | 0.8771 | 0.5371 | 0.8978 | 0.8582 | 0.7878 | 0.8656 | 0.6774 | 0.7649 | 0.4222 | 0.7862 | 0.7159 | 0.7238 |
| 0.0964 | 17.03 | 6300 | 0.6379 | 0.7076 | 0.8136 | 0.8780 | 0.9301 | 0.8130 | 0.8666 | 0.5375 | 0.9036 | 0.8663 | 0.7778 | 0.8669 | 0.6728 | 0.7716 | 0.4315 | 0.7900 | 0.7053 | 0.7149 |
| 0.1279 | 17.08 | 6320 | 0.6366 | 0.7197 | 0.8239 | 0.8860 | 0.9330 | 0.7997 | 0.8757 | 0.5572 | 0.8977 | 0.8582 | 0.8459 | 0.8667 | 0.6734 | 0.7659 | 0.4390 | 0.8092 | 0.7100 | 0.7736 |
| 0.0769 | 17.14 | 6340 | 0.6694 | 0.7059 | 0.8111 | 0.8761 | 0.9289 | 0.8014 | 0.8739 | 0.5607 | 0.9071 | 0.8316 | 0.7741 | 0.8661 | 0.6741 | 0.7654 | 0.4358 | 0.7845 | 0.7054 | 0.7097 |
| 0.2735 | 17.19 | 6360 | 0.6678 | 0.7025 | 0.8049 | 0.8746 | 0.9355 | 0.7992 | 0.8800 | 0.5124 | 0.9002 | 0.8593 | 0.7478 | 0.8642 | 0.6781 | 0.7625 | 0.4280 | 0.7832 | 0.7133 | 0.6881 |
| 1.1427 | 17.24 | 6380 | 0.7853 | 0.6887 | 0.7947 | 0.8684 | 0.9392 | 0.7967 | 0.8571 | 0.5001 | 0.9002 | 0.8685 | 0.7011 | 0.8631 | 0.6789 | 0.7582 | 0.4184 | 0.7845 | 0.6719 | 0.6461 |
| 0.1354 | 17.3 | 6400 | 0.7422 | 0.6895 | 0.8006 | 0.8688 | 0.9318 | 0.7965 | 0.8712 | 0.5213 | 0.8984 | 0.8735 | 0.7115 | 0.8640 | 0.6803 | 0.7600 | 0.4258 | 0.7916 | 0.6523 | 0.6523 |
| 0.077 | 17.35 | 6420 | 0.7529 | 0.6904 | 0.8037 | 0.8689 | 0.9321 | 0.8140 | 0.8624 | 0.5433 | 0.8997 | 0.8618 | 0.7123 | 0.8635 | 0.6777 | 0.7629 | 0.4356 | 0.7927 | 0.6489 | 0.6514 |
| 0.1084 | 17.41 | 6440 | 0.7545 | 0.6923 | 0.8042 | 0.8697 | 0.9293 | 0.8130 | 0.8644 | 0.5499 | 0.9059 | 0.8473 | 0.7197 | 0.8643 | 0.6784 | 0.7672 | 0.4362 | 0.7916 | 0.6503 | 0.6579 |
| 0.2807 | 17.46 | 6460 | 0.7531 | 0.6949 | 0.8013 | 0.8712 | 0.9289 | 0.7926 | 0.8760 | 0.5274 | 0.9083 | 0.8466 | 0.7293 | 0.8654 | 0.6809 | 0.7682 | 0.4268 | 0.7867 | 0.6682 | 0.6680 |
| 0.0654 | 17.51 | 6480 | 0.6718 | 0.7027 | 0.8091 | 0.8760 | 0.9327 | 0.7899 | 0.8828 | 0.5345 | 0.8987 | 0.8595 | 0.7658 | 0.8670 | 0.6807 | 0.7666 | 0.4298 | 0.7942 | 0.6800 | 0.7005 |
| 0.1048 | 17.57 | 6500 | 0.6738 | 0.6983 | 0.8118 | 0.8732 | 0.9331 | 0.8010 | 0.8705 | 0.5639 | 0.8910 | 0.8709 | 0.7522 | 0.8674 | 0.6803 | 0.7619 | 0.4304 | 0.7917 | 0.6662 | 0.6904 |
| 0.381 | 17.62 | 6520 | 0.7038 | 0.6981 | 0.8061 | 0.8734 | 0.9287 | 0.7951 | 0.8677 | 0.5456 | 0.9080 | 0.8519 | 0.7457 | 0.8688 | 0.6800 | 0.7643 | 0.4347 | 0.7919 | 0.6650 | 0.6817 |
| 0.1314 | 17.68 | 6540 | 0.6728 | 0.6980 | 0.8106 | 0.8735 | 0.9289 | 0.8133 | 0.8745 | 0.5446 | 0.8990 | 0.8648 | 0.7494 | 0.8688 | 0.6785 | 0.7631 | 0.4341 | 0.7936 | 0.6643 | 0.6838 |
| 0.1491 | 17.73 | 6560 | 0.6671 | 0.6977 | 0.8085 | 0.8734 | 0.9352 | 0.7997 | 0.8798 | 0.5403 | 0.8912 | 0.8621 | 0.7515 | 0.8674 | 0.6751 | 0.7670 | 0.4256 | 0.7896 | 0.6707 | 0.6883 |
| 0.1503 | 17.78 | 6580 | 0.6852 | 0.7015 | 0.8107 | 0.8753 | 0.9295 | 0.8117 | 0.8676 | 0.5280 | 0.8995 | 0.8754 | 0.7631 | 0.8684 | 0.6735 | 0.7691 | 0.4218 | 0.7877 | 0.6913 | 0.6990 |
| 0.1663 | 17.84 | 6600 | 0.7299 | 0.6919 | 0.7991 | 0.8716 | 0.9317 | 0.8027 | 0.8827 | 0.4864 | 0.9037 | 0.8546 | 0.7319 | 0.8664 | 0.6721 | 0.7591 | 0.4080 | 0.7874 | 0.6781 | 0.6723 |
| 0.3482 | 17.89 | 6620 | 0.7122 | 0.6898 | 0.8034 | 0.8694 | 0.9314 | 0.8109 | 0.8731 | 0.5298 | 0.8994 | 0.8683 | 0.7107 | 0.8663 | 0.6715 | 0.7638 | 0.4228 | 0.7885 | 0.6630 | 0.6530 |
| 0.1922 | 17.95 | 6640 | 0.6779 | 0.7056 | 0.8092 | 0.8776 | 0.9321 | 0.8065 | 0.8784 | 0.5085 | 0.9022 | 0.8615 | 0.7749 | 0.8682 | 0.6726 | 0.7645 | 0.4258 | 0.7881 | 0.7115 | 0.7086 |
| 0.1884 | 18.0 | 6660 | 0.6645 | 0.7198 | 0.8220 | 0.8866 | 0.9359 | 0.8106 | 0.8779 | 0.5209 | 0.8947 | 0.8665 | 0.8474 | 0.8658 | 0.6723 | 0.7630 | 0.4309 | 0.8081 | 0.7190 | 0.7798 |
| 0.1226 | 18.05 | 6680 | 0.6551 | 0.7183 | 0.8152 | 0.8869 | 0.9319 | 0.7956 | 0.8877 | 0.4796 | 0.9040 | 0.8558 | 0.8520 | 0.8658 | 0.6740 | 0.7626 | 0.4160 | 0.8080 | 0.7200 | 0.7819 |
| 0.0977 | 18.11 | 6700 | 0.6134 | 0.7066 | 0.8160 | 0.8774 | 0.9320 | 0.8233 | 0.8675 | 0.5421 | 0.8964 | 0.8781 | 0.7726 | 0.8663 | 0.6676 | 0.7659 | 0.4381 | 0.7908 | 0.7092 | 0.7088 |
| 0.2765 | 18.16 | 6720 | 0.6325 | 0.7017 | 0.8073 | 0.8742 | 0.9319 | 0.8028 | 0.8757 | 0.5204 | 0.8995 | 0.8725 | 0.7479 | 0.8629 | 0.6725 | 0.7594 | 0.4320 | 0.7855 | 0.7126 | 0.6869 |
| 0.1163 | 18.22 | 6740 | 0.6932 | 0.6974 | 0.8006 | 0.8713 | 0.9360 | 0.7784 | 0.8745 | 0.5415 | 0.9007 | 0.8373 | 0.7357 | 0.8616 | 0.6728 | 0.7599 | 0.4262 | 0.7782 | 0.7096 | 0.6736 |
| 0.1935 | 18.27 | 6760 | 0.6326 | 0.7070 | 0.8147 | 0.8771 | 0.9289 | 0.8066 | 0.8637 | 0.5510 | 0.8988 | 0.8701 | 0.7836 | 0.8613 | 0.6730 | 0.7613 | 0.4291 | 0.7910 | 0.7158 | 0.7178 |
| 0.1554 | 18.32 | 6780 | 0.6887 | 0.6979 | 0.8034 | 0.8722 | 0.9319 | 0.8002 | 0.8747 | 0.5152 | 0.9026 | 0.8717 | 0.7276 | 0.8616 | 0.6731 | 0.7622 | 0.4303 | 0.7850 | 0.7034 | 0.6696 |
| 0.2316 | 18.38 | 6800 | 0.6220 | 0.7078 | 0.8151 | 0.8768 | 0.9300 | 0.8078 | 0.8783 | 0.5549 | 0.8989 | 0.8694 | 0.7664 | 0.8657 | 0.6770 | 0.7652 | 0.4435 | 0.7876 | 0.7139 | 0.7019 |
| 0.1733 | 18.43 | 6820 | 0.6711 | 0.7142 | 0.8159 | 0.8810 | 0.9347 | 0.8079 | 0.8685 | 0.5294 | 0.8971 | 0.8690 | 0.8044 | 0.8676 | 0.6785 | 0.7619 | 0.4403 | 0.7905 | 0.7253 | 0.7351 |
| 0.1224 | 18.49 | 6840 | 0.6410 | 0.7059 | 0.8082 | 0.8768 | 0.9400 | 0.7931 | 0.8775 | 0.5145 | 0.8898 | 0.8697 | 0.7730 | 0.8596 | 0.6733 | 0.7573 | 0.4333 | 0.7906 | 0.7160 | 0.7113 |
| 0.1923 | 18.54 | 6860 | 0.6620 | 0.7044 | 0.8150 | 0.8766 | 0.9239 | 0.8152 | 0.8822 | 0.5231 | 0.8951 | 0.8793 | 0.7865 | 0.8678 | 0.6766 | 0.7616 | 0.4224 | 0.7881 | 0.6969 | 0.7173 |
| 0.1202 | 18.59 | 6880 | 0.7112 | 0.7034 | 0.8076 | 0.8763 | 0.9339 | 0.7943 | 0.8704 | 0.5197 | 0.8999 | 0.8672 | 0.7676 | 0.8687 | 0.6784 | 0.7652 | 0.4216 | 0.7872 | 0.6982 | 0.7043 |
| 0.1458 | 18.65 | 6900 | 0.6784 | 0.7092 | 0.8119 | 0.8786 | 0.9271 | 0.7945 | 0.8805 | 0.5373 | 0.9076 | 0.8470 | 0.7893 | 0.8697 | 0.6761 | 0.7665 | 0.4236 | 0.7841 | 0.7202 | 0.7240 |
| 0.081 | 18.7 | 6920 | 0.6600 | 0.7079 | 0.8181 | 0.8779 | 0.9298 | 0.8092 | 0.8810 | 0.5475 | 0.8891 | 0.8767 | 0.7935 | 0.8696 | 0.6776 | 0.7622 | 0.4296 | 0.7868 | 0.7054 | 0.7242 |
| 0.0973 | 18.76 | 6940 | 0.7119 | 0.7080 | 0.8144 | 0.8779 | 0.9292 | 0.8136 | 0.8725 | 0.5428 | 0.9021 | 0.8567 | 0.7840 | 0.8687 | 0.6758 | 0.7623 | 0.4331 | 0.7867 | 0.7101 | 0.7194 |
| 0.1824 | 18.81 | 6960 | 0.6751 | 0.6980 | 0.8063 | 0.8733 | 0.9315 | 0.7980 | 0.8794 | 0.5285 | 0.9009 | 0.8668 | 0.7388 | 0.8677 | 0.6747 | 0.7631 | 0.4375 | 0.7902 | 0.6740 | 0.6789 |
| 0.0786 | 18.86 | 6980 | 0.7423 | 0.6991 | 0.8076 | 0.8737 | 0.9306 | 0.8071 | 0.8693 | 0.5382 | 0.9039 | 0.8581 | 0.7459 | 0.8679 | 0.6727 | 0.7617 | 0.4393 | 0.7888 | 0.6784 | 0.6849 |
| 0.1518 | 18.92 | 7000 | 0.7237 | 0.6994 | 0.8069 | 0.8740 | 0.9324 | 0.8023 | 0.8753 | 0.5306 | 0.9024 | 0.8620 | 0.7431 | 0.8668 | 0.6725 | 0.7641 | 0.4475 | 0.7942 | 0.6693 | 0.6812 |
| 0.1411 | 18.97 | 7020 | 0.7966 | 0.6968 | 0.8090 | 0.8717 | 0.9326 | 0.7977 | 0.8739 | 0.5625 | 0.8947 | 0.8694 | 0.7322 | 0.8662 | 0.6712 | 0.7617 | 0.4444 | 0.7873 | 0.6734 | 0.6735 |
| 0.1319 | 19.03 | 7040 | 0.7241 | 0.6968 | 0.8087 | 0.8718 | 0.9274 | 0.8138 | 0.8690 | 0.5488 | 0.9029 | 0.8643 | 0.7344 | 0.8664 | 0.6683 | 0.7650 | 0.4403 | 0.7857 | 0.6774 | 0.6747 |
| 0.1517 | 19.08 | 7060 | 0.7034 | 0.6989 | 0.8102 | 0.8732 | 0.9294 | 0.7966 | 0.8792 | 0.5567 | 0.8970 | 0.8627 | 0.7498 | 0.8672 | 0.6726 | 0.7639 | 0.4376 | 0.7882 | 0.6766 | 0.6861 |
| 0.1236 | 19.14 | 7080 | 0.7254 | 0.6991 | 0.8084 | 0.8739 | 0.9297 | 0.8008 | 0.8771 | 0.5359 | 0.9011 | 0.8675 | 0.7469 | 0.8683 | 0.6720 | 0.7652 | 0.4412 | 0.7914 | 0.6715 | 0.6842 |
| 0.1812 | 19.19 | 7100 | 0.7489 | 0.6955 | 0.8048 | 0.8719 | 0.9290 | 0.7970 | 0.8700 | 0.5319 | 0.9039 | 0.8690 | 0.7328 | 0.8667 | 0.6728 | 0.7615 | 0.4412 | 0.7917 | 0.6621 | 0.6726 |
| 0.1011 | 19.24 | 7120 | 0.7168 | 0.6946 | 0.8074 | 0.8715 | 0.9343 | 0.8158 | 0.8696 | 0.5449 | 0.8974 | 0.8649 | 0.7248 | 0.8652 | 0.6718 | 0.7659 | 0.4412 | 0.7946 | 0.6574 | 0.6660 |
| 0.1236 | 19.3 | 7140 | 0.7489 | 0.6930 | 0.8047 | 0.8707 | 0.9311 | 0.8141 | 0.8826 | 0.5385 | 0.9044 | 0.8466 | 0.7155 | 0.8652 | 0.6714 | 0.7678 | 0.4405 | 0.7930 | 0.6560 | 0.6573 |
| 0.087 | 19.35 | 7160 | 0.7990 | 0.6947 | 0.8051 | 0.8714 | 0.9292 | 0.8114 | 0.8772 | 0.5299 | 0.9048 | 0.8617 | 0.7217 | 0.8665 | 0.6744 | 0.7675 | 0.4385 | 0.7908 | 0.6620 | 0.6632 |
| 0.0746 | 19.41 | 7180 | 0.7367 | 0.6960 | 0.8054 | 0.8719 | 0.9284 | 0.8010 | 0.8743 | 0.5300 | 0.9044 | 0.8740 | 0.7257 | 0.8673 | 0.6767 | 0.7663 | 0.4423 | 0.7916 | 0.6616 | 0.6662 |
| 0.1389 | 19.46 | 7200 | 0.8094 | 0.6963 | 0.8058 | 0.8721 | 0.9297 | 0.8056 | 0.8665 | 0.5318 | 0.9038 | 0.8733 | 0.7296 | 0.8664 | 0.6738 | 0.7654 | 0.4449 | 0.7929 | 0.6611 | 0.6693 |
| 0.1128 | 19.51 | 7220 | 0.7732 | 0.6950 | 0.8047 | 0.8718 | 0.9341 | 0.8098 | 0.8604 | 0.5435 | 0.9039 | 0.8530 | 0.7285 | 0.8648 | 0.6725 | 0.7671 | 0.4374 | 0.7931 | 0.6613 | 0.6688 |
| 0.0658 | 19.57 | 7240 | 0.7832 | 0.6946 | 0.8020 | 0.8713 | 0.9326 | 0.7877 | 0.8758 | 0.5404 | 0.9049 | 0.8448 | 0.7278 | 0.8652 | 0.6743 | 0.7660 | 0.4331 | 0.7886 | 0.6652 | 0.6697 |
| 0.1991 | 19.62 | 7260 | 0.7704 | 0.6952 | 0.8050 | 0.8718 | 0.9329 | 0.8080 | 0.8705 | 0.5356 | 0.9022 | 0.8591 | 0.7266 | 0.8655 | 0.6748 | 0.7656 | 0.4362 | 0.7917 | 0.6646 | 0.6680 |
| 0.0798 | 19.68 | 7280 | 0.7074 | 0.6966 | 0.8108 | 0.8723 | 0.9286 | 0.8124 | 0.8734 | 0.5565 | 0.8983 | 0.8681 | 0.7383 | 0.8671 | 0.6754 | 0.7664 | 0.4350 | 0.7925 | 0.6647 | 0.6748 |
| 0.2001 | 19.73 | 7300 | 0.6872 | 0.7027 | 0.8135 | 0.8747 | 0.9288 | 0.8011 | 0.8833 | 0.5605 | 0.8961 | 0.8659 | 0.7586 | 0.8658 | 0.6776 | 0.7633 | 0.4302 | 0.7871 | 0.6996 | 0.6953 |
| 0.1103 | 19.78 | 7320 | 0.7072 | 0.7050 | 0.8092 | 0.8756 | 0.9345 | 0.7969 | 0.8615 | 0.5538 | 0.9030 | 0.8568 | 0.7582 | 0.8635 | 0.6759 | 0.7655 | 0.4312 | 0.7858 | 0.7144 | 0.6984 |
| 1.6601 | 19.84 | 7340 | 0.7160 | 0.7033 | 0.8113 | 0.8748 | 0.9334 | 0.7963 | 0.8750 | 0.5583 | 0.8968 | 0.8674 | 0.7517 | 0.8640 | 0.6758 | 0.7652 | 0.4315 | 0.7862 | 0.7071 | 0.6933 |
| 1.7574 | 19.89 | 7360 | 0.7489 | 0.6998 | 0.8093 | 0.8728 | 0.9346 | 0.8027 | 0.8794 | 0.5533 | 0.8950 | 0.8676 | 0.7327 | 0.8615 | 0.6738 | 0.7658 | 0.4333 | 0.7860 | 0.7013 | 0.6772 |
| 0.09 | 19.95 | 7380 | 0.6577 | 0.7101 | 0.8147 | 0.8790 | 0.9325 | 0.7968 | 0.8719 | 0.5489 | 0.9002 | 0.8700 | 0.7826 | 0.8670 | 0.6763 | 0.7667 | 0.4332 | 0.7904 | 0.7189 | 0.7181 |
| 0.2354 | 20.0 | 7400 | 0.6462 | 0.7126 | 0.8219 | 0.8804 | 0.9318 | 0.8027 | 0.8722 | 0.5738 | 0.8921 | 0.8781 | 0.8026 | 0.8668 | 0.6768 | 0.7677 | 0.4277 | 0.7936 | 0.7176 | 0.7380 |
| 0.1215 | 20.05 | 7420 | 0.6418 | 0.7203 | 0.8229 | 0.8860 | 0.9304 | 0.7973 | 0.8817 | 0.5427 | 0.8999 | 0.8656 | 0.8428 | 0.8695 | 0.6778 | 0.7662 | 0.4271 | 0.8020 | 0.7261 | 0.7731 |
| 0.1326 | 20.11 | 7440 | 0.6310 | 0.7183 | 0.8203 | 0.8844 | 0.9333 | 0.8062 | 0.8694 | 0.5379 | 0.9006 | 0.8700 | 0.8247 | 0.8687 | 0.6762 | 0.7660 | 0.4323 | 0.7982 | 0.7285 | 0.7584 |
| 0.1656 | 20.16 | 7460 | 0.6689 | 0.7193 | 0.8195 | 0.8854 | 0.9294 | 0.8052 | 0.8677 | 0.5223 | 0.9054 | 0.8690 | 0.8375 | 0.8701 | 0.6762 | 0.7636 | 0.4282 | 0.7979 | 0.7330 | 0.7664 |
| 0.0915 | 20.22 | 7480 | 0.6288 | 0.7211 | 0.8199 | 0.8874 | 0.9298 | 0.8059 | 0.8792 | 0.5091 | 0.9073 | 0.8569 | 0.8508 | 0.8688 | 0.6745 | 0.7635 | 0.4236 | 0.8049 | 0.7304 | 0.7820 |
| 0.0714 | 20.27 | 7500 | 0.6200 | 0.7269 | 0.8256 | 0.8907 | 0.9311 | 0.8028 | 0.8822 | 0.5254 | 0.9037 | 0.8576 | 0.8768 | 0.8696 | 0.6752 | 0.7658 | 0.4294 | 0.8114 | 0.7320 | 0.8046 |
| 0.2675 | 20.32 | 7520 | 0.6206 | 0.7266 | 0.8236 | 0.8909 | 0.9326 | 0.7991 | 0.8770 | 0.5145 | 0.9055 | 0.8624 | 0.8741 | 0.8695 | 0.6746 | 0.7692 | 0.4282 | 0.8128 | 0.7305 | 0.8017 |
| 0.1737 | 20.38 | 7540 | 0.6122 | 0.7266 | 0.8265 | 0.8907 | 0.9259 | 0.8079 | 0.8838 | 0.5225 | 0.9088 | 0.8621 | 0.8748 | 0.8696 | 0.6746 | 0.7715 | 0.4238 | 0.8119 | 0.7315 | 0.8033 |
| 0.0829 | 20.43 | 7560 | 0.6066 | 0.7261 | 0.8264 | 0.8910 | 0.9338 | 0.8110 | 0.8824 | 0.5125 | 0.8984 | 0.8716 | 0.8754 | 0.8705 | 0.6742 | 0.7717 | 0.4179 | 0.8123 | 0.7317 | 0.8042 |
| 0.0829 | 20.49 | 7580 | 0.6349 | 0.7136 | 0.8165 | 0.8821 | 0.9343 | 0.7894 | 0.8839 | 0.5315 | 0.8964 | 0.8745 | 0.8055 | 0.8700 | 0.6760 | 0.7734 | 0.4188 | 0.7929 | 0.7238 | 0.7403 |
| 0.0523 | 20.54 | 7600 | 0.7175 | 0.7086 | 0.8114 | 0.8784 | 0.9310 | 0.8067 | 0.8763 | 0.5279 | 0.9061 | 0.8521 | 0.7794 | 0.8692 | 0.6779 | 0.7724 | 0.4220 | 0.7854 | 0.7187 | 0.7150 |
| 0.0771 | 20.59 | 7620 | 0.6773 | 0.7081 | 0.8122 | 0.8785 | 0.9365 | 0.8072 | 0.8763 | 0.5319 | 0.8996 | 0.8628 | 0.7709 | 0.8679 | 0.6774 | 0.7774 | 0.4234 | 0.7896 | 0.7127 | 0.7081 |
| 0.4593 | 20.65 | 7640 | 0.7130 | 0.7001 | 0.8051 | 0.8751 | 0.9288 | 0.8155 | 0.8759 | 0.4955 | 0.9106 | 0.8654 | 0.7442 | 0.8692 | 0.6768 | 0.7745 | 0.4197 | 0.7890 | 0.6900 | 0.6817 |
| 1.3661 | 20.7 | 7660 | 0.6997 | 0.6960 | 0.8050 | 0.8721 | 0.9350 | 0.8002 | 0.8816 | 0.5299 | 0.8996 | 0.8712 | 0.7173 | 0.8675 | 0.6771 | 0.7749 | 0.4303 | 0.7900 | 0.6738 | 0.6581 |
| 0.1361 | 20.76 | 7680 | 0.7710 | 0.6942 | 0.8043 | 0.8712 | 0.9334 | 0.8141 | 0.8666 | 0.5209 | 0.9003 | 0.8784 | 0.7168 | 0.8663 | 0.6784 | 0.7681 | 0.4296 | 0.7900 | 0.6704 | 0.6570 |
| 0.1341 | 20.81 | 7700 | 0.7353 | 0.6939 | 0.8009 | 0.8717 | 0.9340 | 0.8127 | 0.8658 | 0.4955 | 0.9043 | 0.8765 | 0.7171 | 0.8665 | 0.6786 | 0.7676 | 0.4213 | 0.7895 | 0.6759 | 0.6581 |
| 0.1385 | 20.86 | 7720 | 0.6858 | 0.6975 | 0.8045 | 0.8735 | 0.9329 | 0.8072 | 0.8827 | 0.4990 | 0.8995 | 0.8725 | 0.7377 | 0.8672 | 0.6793 | 0.7653 | 0.4218 | 0.7893 | 0.6842 | 0.6757 |
| 0.13 | 20.92 | 7740 | 0.7104 | 0.7030 | 0.8086 | 0.8756 | 0.9307 | 0.8123 | 0.8740 | 0.5147 | 0.9034 | 0.8713 | 0.7540 | 0.8664 | 0.6789 | 0.7676 | 0.4285 | 0.7894 | 0.6999 | 0.6901 |
| 0.2066 | 20.97 | 7760 | 0.7073 | 0.7038 | 0.8030 | 0.8775 | 0.9329 | 0.8014 | 0.8815 | 0.4723 | 0.9081 | 0.8576 | 0.7674 | 0.8665 | 0.6802 | 0.7683 | 0.4108 | 0.7903 | 0.7081 | 0.7024 |
| 0.1249 | 21.03 | 7780 | 0.6910 | 0.7139 | 0.8107 | 0.8829 | 0.9368 | 0.7983 | 0.8835 | 0.4947 | 0.9036 | 0.8429 | 0.8148 | 0.8669 | 0.6799 | 0.7686 | 0.4185 | 0.7969 | 0.7212 | 0.7452 |
| 0.1122 | 21.08 | 7800 | 0.6585 | 0.7111 | 0.8104 | 0.8804 | 0.9323 | 0.8005 | 0.8776 | 0.5014 | 0.9058 | 0.8675 | 0.7881 | 0.8700 | 0.6808 | 0.7677 | 0.4258 | 0.7898 | 0.7220 | 0.7219 |
| 0.1621 | 21.14 | 7820 | 0.6931 | 0.7090 | 0.8118 | 0.8783 | 0.9329 | 0.8013 | 0.8827 | 0.5265 | 0.9025 | 0.8719 | 0.7647 | 0.8706 | 0.6802 | 0.7684 | 0.4347 | 0.7855 | 0.7195 | 0.7045 |
| 0.1049 | 21.19 | 7840 | 0.6546 | 0.7197 | 0.8182 | 0.8851 | 0.9320 | 0.7959 | 0.8877 | 0.5237 | 0.9050 | 0.8595 | 0.8234 | 0.8726 | 0.6800 | 0.7687 | 0.4337 | 0.7956 | 0.7331 | 0.7542 |
| 0.1394 | 21.24 | 7860 | 0.7061 | 0.7146 | 0.8144 | 0.8814 | 0.9335 | 0.7983 | 0.8808 | 0.5293 | 0.9036 | 0.8617 | 0.7939 | 0.8714 | 0.6785 | 0.7672 | 0.4368 | 0.7876 | 0.7312 | 0.7299 |
| 0.0459 | 21.3 | 7880 | 0.7485 | 0.7120 | 0.8123 | 0.8796 | 0.9317 | 0.7955 | 0.8811 | 0.5299 | 0.9045 | 0.8590 | 0.7845 | 0.8699 | 0.6787 | 0.7655 | 0.4379 | 0.7853 | 0.7264 | 0.7205 |
| 0.1347 | 21.35 | 7900 | 0.6853 | 0.7143 | 0.8153 | 0.8816 | 0.9327 | 0.8045 | 0.8828 | 0.5189 | 0.9005 | 0.8653 | 0.8021 | 0.8700 | 0.6786 | 0.7632 | 0.4344 | 0.7897 | 0.7281 | 0.7363 |
| 0.2338 | 21.41 | 7920 | 0.6735 | 0.7206 | 0.8208 | 0.8857 | 0.9344 | 0.8137 | 0.8709 | 0.5275 | 0.9011 | 0.8640 | 0.8341 | 0.8700 | 0.6770 | 0.7663 | 0.4361 | 0.7989 | 0.7325 | 0.7635 |
| 0.1688 | 21.46 | 7940 | 0.6418 | 0.7210 | 0.8133 | 0.8872 | 0.9371 | 0.7777 | 0.8804 | 0.4946 | 0.9067 | 0.8563 | 0.8403 | 0.8695 | 0.6771 | 0.7692 | 0.4278 | 0.8027 | 0.7297 | 0.7708 |
| 0.0947 | 21.51 | 7960 | 0.6161 | 0.7283 | 0.8263 | 0.8911 | 0.9324 | 0.8138 | 0.8808 | 0.5151 | 0.9036 | 0.8694 | 0.8694 | 0.8696 | 0.6808 | 0.7728 | 0.4360 | 0.8136 | 0.7290 | 0.7964 |
| 0.1065 | 21.57 | 7980 | 0.6244 | 0.7253 | 0.8246 | 0.8891 | 0.9328 | 0.8065 | 0.8800 | 0.5268 | 0.9026 | 0.8679 | 0.8555 | 0.8686 | 0.6796 | 0.7694 | 0.4365 | 0.8107 | 0.7262 | 0.7859 |
| 0.0763 | 21.62 | 8000 | 0.7093 | 0.7007 | 0.8061 | 0.8746 | 0.9326 | 0.8029 | 0.8719 | 0.5272 | 0.9068 | 0.8615 | 0.7399 | 0.8679 | 0.6782 | 0.7684 | 0.4334 | 0.7898 | 0.6870 | 0.6800 |
| 0.13 | 21.68 | 8020 | 0.6518 | 0.7108 | 0.8161 | 0.8800 | 0.9349 | 0.8087 | 0.8773 | 0.5368 | 0.8971 | 0.8752 | 0.7830 | 0.8686 | 0.6796 | 0.7707 | 0.4403 | 0.7964 | 0.7034 | 0.7165 |
| 0.1545 | 21.73 | 8040 | 0.6582 | 0.7096 | 0.8114 | 0.8790 | 0.9321 | 0.7886 | 0.8799 | 0.5305 | 0.9037 | 0.8671 | 0.7782 | 0.8679 | 0.6783 | 0.7673 | 0.4368 | 0.7918 | 0.7128 | 0.7126 |
| 0.1939 | 21.78 | 8060 | 0.6539 | 0.7086 | 0.8130 | 0.8781 | 0.9337 | 0.8064 | 0.8771 | 0.5395 | 0.9017 | 0.8648 | 0.7680 | 0.8671 | 0.6803 | 0.7696 | 0.4366 | 0.7913 | 0.7103 | 0.7050 |
| 0.073 | 21.84 | 8080 | 0.7916 | 0.6996 | 0.8091 | 0.8734 | 0.9307 | 0.7970 | 0.8790 | 0.5424 | 0.8986 | 0.8808 | 0.7353 | 0.8675 | 0.6790 | 0.7671 | 0.4382 | 0.7892 | 0.6810 | 0.6756 |
| 0.085 | 21.89 | 8100 | 0.7277 | 0.7046 | 0.8115 | 0.8762 | 0.9299 | 0.8081 | 0.8840 | 0.5282 | 0.9001 | 0.8694 | 0.7607 | 0.8679 | 0.6804 | 0.7652 | 0.4367 | 0.7900 | 0.6957 | 0.6961 |
| 0.2033 | 21.95 | 8120 | 0.7420 | 0.6993 | 0.8078 | 0.8734 | 0.9324 | 0.7945 | 0.8860 | 0.5394 | 0.8983 | 0.8674 | 0.7368 | 0.8677 | 0.6798 | 0.7670 | 0.4328 | 0.7884 | 0.6832 | 0.6758 |
| 0.1325 | 22.0 | 8140 | 0.7304 | 0.6987 | 0.8074 | 0.8734 | 0.9329 | 0.7983 | 0.8827 | 0.5278 | 0.8955 | 0.8729 | 0.7419 | 0.8679 | 0.6799 | 0.7666 | 0.4299 | 0.7885 | 0.6804 | 0.6780 |
| 0.0975 | 22.05 | 8160 | 0.7002 | 0.6988 | 0.8116 | 0.8730 | 0.9310 | 0.8059 | 0.8824 | 0.5541 | 0.8942 | 0.8757 | 0.7382 | 0.8684 | 0.6799 | 0.7680 | 0.4371 | 0.7902 | 0.6729 | 0.6754 |
| 0.0684 | 22.11 | 8180 | 0.7231 | 0.6985 | 0.8051 | 0.8733 | 0.9328 | 0.7797 | 0.8840 | 0.5428 | 0.9023 | 0.8585 | 0.7358 | 0.8682 | 0.6782 | 0.7660 | 0.4356 | 0.7893 | 0.6779 | 0.6746 |
| 0.1832 | 22.16 | 8200 | 0.6781 | 0.6992 | 0.8090 | 0.8739 | 0.9302 | 0.8037 | 0.8828 | 0.5384 | 0.9013 | 0.8647 | 0.7418 | 0.8689 | 0.6809 | 0.7669 | 0.4350 | 0.7930 | 0.6709 | 0.6787 |
| 0.1687 | 22.22 | 8220 | 0.7406 | 0.7015 | 0.8115 | 0.8748 | 0.9284 | 0.8067 | 0.8775 | 0.5412 | 0.8996 | 0.8697 | 0.7575 | 0.8689 | 0.6792 | 0.7661 | 0.4299 | 0.7893 | 0.6838 | 0.6930 |
| 0.0902 | 22.27 | 8240 | 0.7015 | 0.7096 | 0.8115 | 0.8791 | 0.9286 | 0.8025 | 0.8744 | 0.5226 | 0.9084 | 0.8584 | 0.7857 | 0.8695 | 0.6790 | 0.7681 | 0.4297 | 0.7887 | 0.7137 | 0.7183 |
| 0.1202 | 22.32 | 8260 | 0.7481 | 0.7031 | 0.8109 | 0.8756 | 0.9331 | 0.8000 | 0.8738 | 0.5445 | 0.8988 | 0.8712 | 0.7551 | 0.8686 | 0.6789 | 0.7677 | 0.4311 | 0.7892 | 0.6932 | 0.6931 |
| 0.6745 | 22.38 | 8280 | 0.7041 | 0.6983 | 0.8084 | 0.8734 | 0.9323 | 0.8016 | 0.8815 | 0.5337 | 0.8974 | 0.8764 | 0.7356 | 0.8690 | 0.6773 | 0.7668 | 0.4315 | 0.7893 | 0.6776 | 0.6765 |
| 0.1366 | 22.43 | 8300 | 0.7009 | 0.7008 | 0.8079 | 0.8749 | 0.9292 | 0.7985 | 0.8733 | 0.5333 | 0.9063 | 0.8650 | 0.7496 | 0.8696 | 0.6777 | 0.7679 | 0.4313 | 0.7903 | 0.6831 | 0.6858 |
| 0.1055 | 22.49 | 8320 | 0.6737 | 0.7011 | 0.8084 | 0.8751 | 0.9311 | 0.8031 | 0.8851 | 0.5314 | 0.9032 | 0.8519 | 0.7531 | 0.8693 | 0.6787 | 0.7670 | 0.4311 | 0.7911 | 0.6825 | 0.6879 |
| 0.1172 | 22.54 | 8340 | 0.7570 | 0.6995 | 0.8085 | 0.8737 | 0.9312 | 0.8105 | 0.8702 | 0.5356 | 0.9011 | 0.8662 | 0.7446 | 0.8686 | 0.6787 | 0.7646 | 0.4297 | 0.7869 | 0.6859 | 0.6823 |
| 0.0575 | 22.59 | 8360 | 0.7264 | 0.7014 | 0.8089 | 0.8745 | 0.9325 | 0.8079 | 0.8722 | 0.5356 | 0.9002 | 0.8633 | 0.7506 | 0.8679 | 0.6790 | 0.7652 | 0.4312 | 0.7867 | 0.6940 | 0.6861 |
| 0.1153 | 22.65 | 8380 | 0.7527 | 0.6944 | 0.8037 | 0.8709 | 0.9330 | 0.8046 | 0.8730 | 0.5308 | 0.9012 | 0.8669 | 0.7167 | 0.8670 | 0.6785 | 0.7645 | 0.4311 | 0.7863 | 0.6753 | 0.6578 |
| 0.1594 | 22.7 | 8400 | 0.7645 | 0.7028 | 0.8075 | 0.8752 | 0.9365 | 0.8001 | 0.8687 | 0.5303 | 0.8990 | 0.8659 | 0.7518 | 0.8655 | 0.6773 | 0.7656 | 0.4298 | 0.7871 | 0.7047 | 0.6897 |
| 0.0405 | 22.76 | 8420 | 0.7033 | 0.7057 | 0.8117 | 0.8761 | 0.9295 | 0.8024 | 0.8721 | 0.5400 | 0.9015 | 0.8740 | 0.7625 | 0.8680 | 0.6787 | 0.7657 | 0.4315 | 0.7843 | 0.7127 | 0.6987 |
| 1.7524 | 22.81 | 8440 | 0.7290 | 0.7005 | 0.8068 | 0.8731 | 0.9365 | 0.8086 | 0.8835 | 0.5363 | 0.8979 | 0.8512 | 0.7335 | 0.8609 | 0.6775 | 0.7657 | 0.4288 | 0.7854 | 0.7108 | 0.6743 |
| 0.0614 | 22.86 | 8460 | 0.7010 | 0.7023 | 0.8095 | 0.8747 | 0.9353 | 0.8113 | 0.8814 | 0.5310 | 0.8956 | 0.8634 | 0.7483 | 0.8641 | 0.6771 | 0.7640 | 0.4267 | 0.7863 | 0.7124 | 0.6857 |
| 0.2336 | 22.92 | 8480 | 0.7311 | 0.7009 | 0.8066 | 0.8738 | 0.9347 | 0.8007 | 0.8797 | 0.5287 | 0.8985 | 0.8621 | 0.7420 | 0.8637 | 0.6761 | 0.7632 | 0.4252 | 0.7846 | 0.7136 | 0.6799 |
| 0.2301 | 22.97 | 8500 | 0.7148 | 0.7062 | 0.8132 | 0.8767 | 0.9332 | 0.8101 | 0.8695 | 0.5433 | 0.8980 | 0.8743 | 0.7641 | 0.8681 | 0.6786 | 0.7662 | 0.4312 | 0.7866 | 0.7127 | 0.7004 |
| 1.6015 | 23.03 | 8520 | 0.7963 | 0.7041 | 0.8103 | 0.8751 | 0.9339 | 0.8077 | 0.8721 | 0.5419 | 0.8994 | 0.8681 | 0.7486 | 0.8656 | 0.6782 | 0.7641 | 0.4308 | 0.7837 | 0.7169 | 0.6894 |
| 0.4187 | 23.08 | 8540 | 0.7661 | 0.7053 | 0.8102 | 0.8757 | 0.9312 | 0.8135 | 0.8670 | 0.5308 | 0.9034 | 0.8712 | 0.7545 | 0.8658 | 0.6785 | 0.7645 | 0.4317 | 0.7838 | 0.7189 | 0.6939 |
| 0.1823 | 23.14 | 8560 | 0.7773 | 0.6978 | 0.8045 | 0.8713 | 0.9323 | 0.7983 | 0.8697 | 0.5399 | 0.9030 | 0.8697 | 0.7185 | 0.8616 | 0.6736 | 0.7637 | 0.4306 | 0.7817 | 0.7118 | 0.6614 |
| 0.1152 | 23.19 | 8580 | 0.7167 | 0.7010 | 0.8101 | 0.8736 | 0.9318 | 0.8108 | 0.8777 | 0.5492 | 0.8997 | 0.8631 | 0.7385 | 0.8625 | 0.6747 | 0.7675 | 0.4297 | 0.7869 | 0.7090 | 0.6770 |
| 0.1159 | 23.24 | 8600 | 0.7670 | 0.7014 | 0.8082 | 0.8737 | 0.9340 | 0.8076 | 0.8658 | 0.5403 | 0.8995 | 0.8698 | 0.7407 | 0.8628 | 0.6750 | 0.7647 | 0.4303 | 0.7847 | 0.7130 | 0.6796 |
| 0.194 | 23.3 | 8620 | 0.7899 | 0.6987 | 0.7988 | 0.8731 | 0.9357 | 0.7832 | 0.8648 | 0.5035 | 0.9065 | 0.8646 | 0.7331 | 0.8614 | 0.6729 | 0.7627 | 0.4245 | 0.7849 | 0.7113 | 0.6732 |
| 0.0968 | 23.35 | 8640 | 0.7002 | 0.7083 | 0.8120 | 0.8785 | 0.9309 | 0.8076 | 0.8844 | 0.5208 | 0.9029 | 0.8603 | 0.7774 | 0.8682 | 0.6787 | 0.7664 | 0.4284 | 0.7894 | 0.7158 | 0.7113 |
| 1.4639 | 23.41 | 8660 | 0.7585 | 0.7071 | 0.8116 | 0.8774 | 0.9342 | 0.7974 | 0.8794 | 0.5378 | 0.8991 | 0.8670 | 0.7663 | 0.8678 | 0.6798 | 0.7667 | 0.4287 | 0.7874 | 0.7146 | 0.7046 |
| 0.3679 | 23.46 | 8680 | 0.7567 | 0.7070 | 0.8118 | 0.8771 | 0.9337 | 0.8019 | 0.8765 | 0.5422 | 0.9007 | 0.8639 | 0.7639 | 0.8671 | 0.6800 | 0.7672 | 0.4292 | 0.7872 | 0.7161 | 0.7022 |
| 0.2112 | 23.51 | 8700 | 0.7798 | 0.7002 | 0.8010 | 0.8733 | 0.9346 | 0.7861 | 0.8768 | 0.5191 | 0.9075 | 0.8516 | 0.7316 | 0.8612 | 0.6765 | 0.7631 | 0.4289 | 0.7846 | 0.7132 | 0.6742 |
| 0.1242 | 23.57 | 8720 | 0.8097 | 0.7044 | 0.8100 | 0.8753 | 0.9325 | 0.7957 | 0.8774 | 0.5472 | 0.9005 | 0.8631 | 0.7537 | 0.8661 | 0.6789 | 0.7632 | 0.4299 | 0.7840 | 0.7148 | 0.6939 |
| 0.1271 | 23.62 | 8740 | 0.7291 | 0.7089 | 0.8146 | 0.8785 | 0.9324 | 0.8063 | 0.8788 | 0.5404 | 0.8987 | 0.8673 | 0.7783 | 0.8689 | 0.6798 | 0.7681 | 0.4287 | 0.7886 | 0.7144 | 0.7140 |
| 0.227 | 23.68 | 8760 | 0.7247 | 0.7100 | 0.8172 | 0.8786 | 0.9300 | 0.8121 | 0.8768 | 0.5467 | 0.8971 | 0.8725 | 0.7851 | 0.8694 | 0.6788 | 0.7661 | 0.4293 | 0.7859 | 0.7208 | 0.7195 |
| 0.1366 | 23.73 | 8780 | 0.7366 | 0.7100 | 0.8178 | 0.8783 | 0.9279 | 0.8176 | 0.8717 | 0.5504 | 0.8988 | 0.8737 | 0.7849 | 0.8690 | 0.6777 | 0.7650 | 0.4302 | 0.7840 | 0.7241 | 0.7200 |
| 0.0983 | 23.78 | 8800 | 0.6922 | 0.7121 | 0.8167 | 0.8797 | 0.9298 | 0.8021 | 0.8783 | 0.5441 | 0.8989 | 0.8696 | 0.7943 | 0.8703 | 0.6800 | 0.7660 | 0.4312 | 0.7861 | 0.7253 | 0.7257 |
| 0.04 | 23.84 | 8820 | 0.7027 | 0.7118 | 0.8164 | 0.8795 | 0.9321 | 0.8111 | 0.8738 | 0.5436 | 0.8987 | 0.8643 | 0.7915 | 0.8690 | 0.6799 | 0.7668 | 0.4308 | 0.7867 | 0.7253 | 0.7241 |
| 0.0622 | 23.89 | 8840 | 0.7024 | 0.7100 | 0.8136 | 0.8787 | 0.9332 | 0.8089 | 0.8727 | 0.5359 | 0.9015 | 0.8651 | 0.7779 | 0.8687 | 0.6794 | 0.7667 | 0.4319 | 0.7868 | 0.7215 | 0.7147 |
| 0.1341 | 23.95 | 8860 | 0.6925 | 0.7084 | 0.8081 | 0.8790 | 0.9332 | 0.8001 | 0.8796 | 0.4940 | 0.9048 | 0.8682 | 0.7765 | 0.8687 | 0.6787 | 0.7658 | 0.4232 | 0.7883 | 0.7207 | 0.7131 |
| 0.1644 | 24.0 | 8880 | 0.7891 | 0.7086 | 0.8102
| 0.8781 | 0.9314 | 0.7996 | 0.8774 | 0.5160 | 0.9025 | 0.8664 | 0.7780 | 0.8682 | 0.6798 | 0.7632 | 0.4267 | 0.7845 | 0.7239 | 0.7137 | | 1.0628 | 24.05 | 8900 | 0.7730 | 0.7101 | 0.8108 | 0.8792 | 0.9332 | 0.8048 | 0.8727 | 0.5139 | 0.9035 | 0.8645 | 0.7829 | 0.8681 | 0.6801 | 0.7653 | 0.4277 | 0.7873 | 0.7252 | 0.7171 | | 0.1389 | 24.11 | 8920 | 0.7636 | 0.7097 | 0.8120 | 0.8790 | 0.9320 | 0.8098 | 0.8714 | 0.5149 | 0.9023 | 0.8706 | 0.7828 | 0.8685 | 0.6799 | 0.7649 | 0.4271 | 0.7870 | 0.7240 | 0.7170 | | 0.0408 | 24.16 | 8940 | 0.7707 | 0.7100 | 0.8134 | 0.8787 | 0.9327 | 0.8057 | 0.8759 | 0.5304 | 0.8987 | 0.8672 | 0.7835 | 0.8681 | 0.6802 | 0.7641 | 0.4277 | 0.7853 | 0.7261 | 0.7186 | | 0.1 | 24.22 | 8960 | 0.7071 | 0.7121 | 0.8173 | 0.8799 | 0.9289 | 0.8129 | 0.8731 | 0.5437 | 0.9021 | 0.8664 | 0.7940 | 0.8697 | 0.6786 | 0.7677 | 0.4310 | 0.7877 | 0.7259 | 0.7245 | | 0.135 | 24.27 | 8980 | 0.7501 | 0.7109 | 0.8137 | 0.8791 | 0.9321 | 0.8073 | 0.8764 | 0.5357 | 0.9032 | 0.8589 | 0.7822 | 0.8686 | 0.6794 | 0.7658 | 0.4296 | 0.7862 | 0.7281 | 0.7187 | | 0.3517 | 24.32 | 9000 | 0.7621 | 0.7114 | 0.8137 | 0.8794 | 0.9322 | 0.8088 | 0.8712 | 0.5408 | 0.9049 | 0.8486 | 0.7894 | 0.8686 | 0.6787 | 0.7670 | 0.4289 | 0.7862 | 0.7267 | 0.7233 | | 0.0275 | 24.38 | 9020 | 0.7665 | 0.7061 | 0.8118 | 0.8769 | 0.9333 | 0.8019 | 0.8751 | 0.5371 | 0.8984 | 0.8716 | 0.7653 | 0.8685 | 0.6792 | 0.7659 | 0.4293 | 0.7863 | 0.7107 | 0.7030 | | 0.1118 | 24.43 | 9040 | 0.7370 | 0.7108 | 0.8117 | 0.8795 | 0.9342 | 0.8043 | 0.8717 | 0.5179 | 0.9016 | 0.8657 | 0.7865 | 0.8678 | 0.6796 | 0.7660 | 0.4288 | 0.7881 | 0.7246 | 0.7205 | | 0.1404 | 24.49 | 9060 | 0.7741 | 0.7094 | 0.8119 | 0.8784 | 0.9324 | 0.7972 | 0.8693 | 0.5348 | 0.9025 | 0.8685 | 0.7786 | 0.8688 | 0.6795 | 0.7652 | 0.4313 | 0.7860 | 0.7210 | 0.7140 | | 0.1383 | 24.54 | 9080 | 0.7696 | 0.7102 | 0.8142 | 0.8786 | 0.9292 | 0.8074 | 0.8720 | 0.5373 | 0.9031 | 0.8665 | 0.7840 | 0.8683 | 0.6793 | 0.7649 | 0.4305 | 0.7856 | 0.7243 | 0.7187 | | 0.0895 | 24.59 | 9100 | 0.7283 | 0.7106 | 0.8164 | 0.8790 | 0.9297 | 0.8120 | 0.8752 | 0.5472 | 0.9017 | 0.8647 | 0.7841 | 0.8695 | 0.6782 | 0.7668 | 0.4318 | 0.7866 | 0.7231 | 0.7185 | | 0.1423 | 24.65 | 9120 | 0.7587 | 0.7101 | 0.8113 | 0.8791 | 0.9336 | 0.8059 | 0.8696 | 0.5177 | 0.9028 | 0.8689 | 0.7809 | 0.8683 | 0.6798 | 0.7654 | 0.4303 | 0.7875 | 0.7226 | 0.7164 | | 0.2422 | 24.7 | 9140 | 0.7617 | 0.7083 | 0.8116 | 0.8777 | 0.9357 | 0.8067 | 0.8668 | 0.5347 | 0.8999 | 0.8668 | 0.7706 | 0.8672 | 0.6792 | 0.7651 | 0.4330 | 0.7865 | 0.7188 | 0.7086 | | 0.2002 | 24.76 | 9160 | 0.8112 | 0.7090 | 0.8078 | 0.8784 | 0.9318 | 0.8020 | 0.8765 | 0.5168 | 0.9099 | 0.8361 | 0.7814 | 0.8673 | 0.6797 | 0.7645 | 0.4279 | 0.7846 | 0.7216 | 0.7176 | | 0.0573 | 24.81 | 9180 | 0.8119 | 0.7100 | 0.8115 | 0.8790 | 0.9326 | 0.8155 | 0.8656 | 0.5155 | 0.9044 | 0.8623 | 0.7843 | 0.8677 | 0.6781 | 0.7643 | 0.4283 | 0.7864 | 0.7256 | 0.7192 | | 0.1477 | 24.86 | 9200 | 0.7879 | 0.7094 | 0.8093 | 0.8790 | 0.9347 | 0.8030 | 0.8710 | 0.5039 | 0.9020 | 0.8678 | 0.7827 | 0.8678 | 0.6796 | 0.7638 | 0.4252 | 0.7865 | 0.7250 | 0.7183 | | 0.085 | 24.92 | 9220 | 0.7683 | 0.7080 | 0.8095 | 0.8780 | 0.9361 | 0.8065 | 0.8634 | 0.5238 | 0.9038 | 0.8640 | 0.7689 | 0.8671 | 0.6792 | 0.7658 | 0.4320 | 0.7884 | 0.7163 | 0.7071 | | 0.1344 | 24.97 | 9240 | 0.7613 | 0.7092 | 0.8104 | 0.8790 | 0.9352 | 0.8067 | 0.8732 | 0.5054 | 0.8997 | 0.8714 | 0.7814 | 0.8677 | 0.6795 | 0.7649 | 0.4259 | 0.7883 | 0.7211 | 0.7170 | ### Framework versions - Transformers 
4.37.0 - Pytorch 2.1.2 - Datasets 2.17.1 - Tokenizers 0.15.1
guirnd/ppo-CartPole-v2
guirnd
2024-02-22T19:16:18Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2024-02-22T18:43:20Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -98.93 +/- 66.84 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 100000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.96,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'guirnd/ppo-CartPole-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
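Because this is a custom (CleanRL-style) implementation rather than a stable-baselines3 checkpoint, loading is manual. Below is a minimal, hypothetical loading-and-evaluation sketch: the checkpoint filename (`model.pt`), the `Agent` architecture (64-unit tanh MLPs for actor and critic), and the state-dict export format are all assumptions, not confirmed by this card. Note also that the card's metadata specifies `LunarLander-v2` despite the repo name mentioning CartPole.

```python
# Hypothetical loading sketch -- filename and network layout are assumptions.
import gymnasium as gym  # LunarLander needs gymnasium[box2d]; newer releases may use "LunarLander-v3"
import torch
import torch.nn as nn
from torch.distributions import Categorical
from huggingface_hub import hf_hub_download

class Agent(nn.Module):
    """CleanRL-style PPO agent: separate actor and critic MLPs with tanh activations."""
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.critic = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                                    nn.Linear(64, 64), nn.Tanh(),
                                    nn.Linear(64, 1))
        self.actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                                   nn.Linear(64, 64), nn.Tanh(),
                                   nn.Linear(64, n_actions))

    def act(self, obs: torch.Tensor) -> torch.Tensor:
        return Categorical(logits=self.actor(obs)).sample()

env = gym.make("LunarLander-v2")
agent = Agent(env.observation_space.shape[0], env.action_space.n)

# Assumes the repo ships a torch state dict under "model.pt".
ckpt = hf_hub_download(repo_id="guirnd/ppo-CartPole-v2", filename="model.pt")
agent.load_state_dict(torch.load(ckpt, map_location="cpu"))

# Roll out one evaluation episode.
obs, _ = env.reset(seed=1)
done, total_reward = False, 0.0
while not done:
    with torch.no_grad():
        action = agent.act(torch.as_tensor(obs, dtype=torch.float32))
    obs, reward, terminated, truncated, _ = env.step(int(action))
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward:.2f}")
```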
DouglasPontes/2020-Q4-25p-filtered-random
DouglasPontes
2024-02-22T19:06:16Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "base_model:cardiffnlp/twitter-roberta-base-2019-90m", "base_model:finetune:cardiffnlp/twitter-roberta-base-2019-90m", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-02-19T22:08:29Z
--- license: mit base_model: cardiffnlp/twitter-roberta-base-2019-90m tags: - generated_from_trainer model-index: - name: 2020-Q4-25p-filtered-random results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2020-Q4-25p-filtered-random This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2019-90m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2681 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.1e-07 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 2400000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-------:|:---------------:| | No log | 0.02 | 8000 | 2.5802 | | 2.8151 | 0.04 | 16000 | 2.4882 | | 2.8151 | 0.07 | 24000 | 2.4292 | | 2.5636 | 0.09 | 32000 | 2.3980 | | 2.5636 | 0.11 | 40000 | 2.3799 | | 2.4947 | 0.13 | 48000 | 2.3665 | | 2.4947 | 0.16 | 56000 | 2.3455 | | 2.473 | 0.18 | 64000 | 2.3419 | | 2.473 | 0.2 | 72000 | 2.3307 | | 2.4512 | 0.22 | 80000 | 2.3289 | | 2.4512 | 0.25 | 88000 | 2.3250 | | 2.4421 | 0.27 | 96000 | 2.3189 | | 2.4421 | 0.29 | 104000 | 2.3200 | | 2.4354 | 0.31 | 112000 | 2.3155 | | 2.4354 | 0.34 | 120000 | 2.3138 | | 2.4324 | 0.36 | 128000 | 2.3054 | | 2.4324 | 0.38 | 136000 | 2.3028 | | 2.4253 | 0.4 | 144000 | 2.3029 | | 2.4253 | 0.43 | 152000 | 2.3006 | | 2.4156 | 0.45 | 160000 | 2.3001 | | 2.4156 | 0.47 | 168000 | 2.2980 | | 2.4165 | 0.49 | 176000 | 2.2913 | | 2.4165 | 0.52 | 184000 | 2.2974 | | 2.4131 | 0.54 | 192000 | 2.2906 | | 2.4131 | 0.56 | 200000 | 2.2908 | | 2.407 | 0.58 | 208000 | 2.2895 | | 2.407 | 0.61 | 216000 | 2.2865 | | 2.4153 | 0.63 | 224000 | 2.2914 | | 2.4153 | 0.65 | 232000 | 2.2806 | | 2.4011 | 0.67 | 240000 | 2.2819 | | 2.4011 | 0.7 | 248000 | 2.2854 | | 2.4087 | 0.72 | 256000 | 2.2837 | | 2.4087 | 0.74 | 264000 | 2.2866 | | 2.4059 | 0.76 | 272000 | 2.2855 | | 2.4059 | 0.79 | 280000 | 2.2868 | | 2.4086 | 0.81 | 288000 | 2.2770 | | 2.4086 | 0.83 | 296000 | 2.2789 | | 2.4093 | 0.85 | 304000 | 2.2792 | | 2.4093 | 0.88 | 312000 | 2.2797 | | 2.4036 | 0.9 | 320000 | 2.2794 | | 2.4036 | 0.92 | 328000 | 2.2768 | | 2.4063 | 0.94 | 336000 | 2.2836 | | 2.4063 | 0.97 | 344000 | 2.2809 | | 2.4047 | 0.99 | 352000 | 2.2808 | | 2.4047 | 1.01 | 360000 | 2.2840 | | 2.4084 | 1.03 | 368000 | 2.2799 | | 2.4084 | 1.06 | 376000 | 2.2726 | | 2.4041 | 1.08 | 384000 | 2.2824 | | 2.4041 | 1.1 | 392000 | 2.2781 | | 2.4034 | 1.12 | 400000 | 2.2751 | | 2.4034 | 1.15 | 408000 | 2.2761 | | 2.3951 | 1.17 | 416000 | 2.2732 | | 2.3951 | 1.19 | 424000 | 2.2710 | | 2.409 | 1.21 | 432000 | 2.2780 | | 2.409 | 1.24 | 440000 | 2.2715 | | 2.3985 | 1.26 | 448000 | 2.2790 | | 2.3985 | 1.28 | 456000 | 2.2766 | | 2.4016 | 1.3 | 464000 | 2.2745 | | 2.4016 | 1.32 | 472000 | 2.2719 | | 2.3978 | 1.35 | 480000 | 2.2755 | | 2.3978 | 1.37 | 488000 | 2.2699 | | 2.406 | 1.39 | 496000 | 2.2823 | | 2.406 | 1.41 | 504000 | 2.2736 | | 2.3958 | 1.44 | 512000 | 2.2728 | | 2.3958 | 1.46 | 520000 | 2.2763 | | 2.406 | 1.48 | 528000 | 2.2781 | 
| 2.406 | 1.5 | 536000 | 2.2723 | | 2.4 | 1.53 | 544000 | 2.2733 | | 2.4 | 1.55 | 552000 | 2.2715 | | 2.3998 | 1.57 | 560000 | 2.2716 | | 2.3998 | 1.59 | 568000 | 2.2751 | | 2.4017 | 1.62 | 576000 | 2.2743 | | 2.4017 | 1.64 | 584000 | 2.2739 | | 2.4019 | 1.66 | 592000 | 2.2755 | | 2.4019 | 1.68 | 600000 | 2.2691 | | 2.398 | 1.71 | 608000 | 2.2706 | | 2.398 | 1.73 | 616000 | 2.2703 | | 2.4027 | 1.75 | 624000 | 2.2657 | | 2.4027 | 1.77 | 632000 | 2.2674 | | 2.4 | 1.8 | 640000 | 2.2749 | | 2.4 | 1.82 | 648000 | 2.2714 | | 2.4046 | 1.84 | 656000 | 2.2695 | | 2.4046 | 1.86 | 664000 | 2.2724 | | 2.4033 | 1.89 | 672000 | 2.2697 | | 2.4033 | 1.91 | 680000 | 2.2697 | | 2.3981 | 1.93 | 688000 | 2.2674 | | 2.3981 | 1.95 | 696000 | 2.2669 | | 2.4029 | 1.98 | 704000 | 2.2755 | | 2.4029 | 2.0 | 712000 | 2.2664 | | 2.4046 | 2.02 | 720000 | 2.2759 | | 2.4046 | 2.04 | 728000 | 2.2689 | | 2.4056 | 2.07 | 736000 | 2.2710 | | 2.4056 | 2.09 | 744000 | 2.2744 | | 2.4036 | 2.11 | 752000 | 2.2653 | | 2.4036 | 2.13 | 760000 | 2.2642 | | 2.3961 | 2.16 | 768000 | 2.2703 | | 2.3961 | 2.18 | 776000 | 2.2683 | | 2.3939 | 2.2 | 784000 | 2.2746 | | 2.3939 | 2.22 | 792000 | 2.2667 | | 2.3998 | 2.25 | 800000 | 2.2690 | | 2.3998 | 2.27 | 808000 | 2.2697 | | 2.3921 | 2.29 | 816000 | 2.2681 | | 2.3921 | 2.31 | 824000 | 2.2740 | | 2.4011 | 2.34 | 832000 | 2.2704 | | 2.4011 | 2.36 | 840000 | 2.2666 | | 2.3948 | 2.38 | 848000 | 2.2689 | | 2.3948 | 2.4 | 856000 | 2.2742 | | 2.3957 | 2.43 | 864000 | 2.2755 | | 2.3957 | 2.45 | 872000 | 2.2689 | | 2.3971 | 2.47 | 880000 | 2.2717 | | 2.3971 | 2.49 | 888000 | 2.2690 | | 2.3982 | 2.52 | 896000 | 2.2645 | | 2.3982 | 2.54 | 904000 | 2.2726 | | 2.4005 | 2.56 | 912000 | 2.2628 | | 2.4005 | 2.58 | 920000 | 2.2726 | | 2.4037 | 2.6 | 928000 | 2.2760 | | 2.4037 | 2.63 | 936000 | 2.2662 | | 2.4031 | 2.65 | 944000 | 2.2729 | | 2.4031 | 2.67 | 952000 | 2.2706 | | 2.4025 | 2.69 | 960000 | 2.2684 | | 2.4025 | 2.72 | 968000 | 2.2635 | | 2.409 | 2.74 | 976000 | 2.2606 | | 2.409 | 2.76 | 984000 | 2.2664 | | 2.4085 | 2.78 | 992000 | 2.2647 | | 2.4085 | 2.81 | 1000000 | 2.2656 | | 2.3971 | 2.83 | 1008000 | 2.2655 | | 2.3971 | 2.85 | 1016000 | 2.2681 | | 2.3946 | 2.87 | 1024000 | 2.2671 | | 2.3946 | 2.9 | 1032000 | 2.2660 | | 2.4063 | 2.92 | 1040000 | 2.2697 | | 2.4063 | 2.94 | 1048000 | 2.2706 | | 2.399 | 2.96 | 1056000 | 2.2625 | | 2.399 | 2.99 | 1064000 | 2.2699 | | 2.4024 | 3.01 | 1072000 | 2.2622 | | 2.4024 | 3.03 | 1080000 | 2.2695 | | 2.4035 | 3.05 | 1088000 | 2.2700 | | 2.4035 | 3.08 | 1096000 | 2.2624 | | 2.4061 | 3.1 | 1104000 | 2.2690 | | 2.4061 | 3.12 | 1112000 | 2.2653 | | 2.4044 | 3.14 | 1120000 | 2.2679 | | 2.4044 | 3.17 | 1128000 | 2.2658 | | 2.3996 | 3.19 | 1136000 | 2.2680 | | 2.3996 | 3.21 | 1144000 | 2.2668 | | 2.3943 | 3.23 | 1152000 | 2.2689 | | 2.3943 | 3.26 | 1160000 | 2.2702 | | 2.3948 | 3.28 | 1168000 | 2.2653 | | 2.3948 | 3.3 | 1176000 | 2.2621 | | 2.4047 | 3.32 | 1184000 | 2.2723 | | 2.4047 | 3.35 | 1192000 | 2.2718 | | 2.4057 | 3.37 | 1200000 | 2.2668 | | 2.4057 | 3.39 | 1208000 | 2.2649 | | 2.3901 | 3.41 | 1216000 | 2.2699 | | 2.3901 | 3.44 | 1224000 | 2.2683 | | 2.3942 | 3.46 | 1232000 | 2.2679 | | 2.3942 | 3.48 | 1240000 | 2.2647 | | 2.4052 | 3.5 | 1248000 | 2.2656 | | 2.4052 | 3.53 | 1256000 | 2.2679 | | 2.401 | 3.55 | 1264000 | 2.2685 | | 2.401 | 3.57 | 1272000 | 2.2654 | | 2.4012 | 3.59 | 1280000 | 2.2607 | | 2.4012 | 3.62 | 1288000 | 2.2668 | | 2.4015 | 3.64 | 1296000 | 2.2672 | | 2.4015 | 3.66 | 1304000 | 2.2685 | | 2.4039 | 3.68 | 1312000 | 2.2675 | | 2.4039 | 3.71 | 
1320000 | 2.2702 | | 2.3927 | 3.73 | 1328000 | 2.2689 | | 2.3927 | 3.75 | 1336000 | 2.2674 | | 2.3998 | 3.77 | 1344000 | 2.2694 | | 2.3998 | 3.8 | 1352000 | 2.2649 | | 2.404 | 3.82 | 1360000 | 2.2635 | | 2.404 | 3.84 | 1368000 | 2.2681 | | 2.4023 | 3.86 | 1376000 | 2.2601 | | 2.4023 | 3.88 | 1384000 | 2.2661 | | 2.393 | 3.91 | 1392000 | 2.2613 | | 2.393 | 3.93 | 1400000 | 2.2717 | | 2.402 | 3.95 | 1408000 | 2.2672 | | 2.402 | 3.97 | 1416000 | 2.2637 | | 2.4047 | 4.0 | 1424000 | 2.2705 | | 2.4047 | 4.02 | 1432000 | 2.2682 | | 2.4045 | 4.04 | 1440000 | 2.2630 | | 2.4045 | 4.06 | 1448000 | 2.2699 | | 2.3973 | 4.09 | 1456000 | 2.2579 | | 2.3973 | 4.11 | 1464000 | 2.2601 | | 2.399 | 4.13 | 1472000 | 2.2609 | | 2.399 | 4.15 | 1480000 | 2.2697 | | 2.399 | 4.18 | 1488000 | 2.2630 | | 2.399 | 4.2 | 1496000 | 2.2658 | | 2.3995 | 4.22 | 1504000 | 2.2656 | | 2.3995 | 4.24 | 1512000 | 2.2689 | | 2.3929 | 4.27 | 1520000 | 2.2678 | | 2.3929 | 4.29 | 1528000 | 2.2694 | | 2.404 | 4.31 | 1536000 | 2.2632 | | 2.404 | 4.33 | 1544000 | 2.2657 | | 2.3932 | 4.36 | 1552000 | 2.2642 | | 2.3932 | 4.38 | 1560000 | 2.2607 | | 2.3985 | 4.4 | 1568000 | 2.2635 | | 2.3985 | 4.42 | 1576000 | 2.2645 | | 2.3997 | 4.45 | 1584000 | 2.2654 | | 2.3997 | 4.47 | 1592000 | 2.2672 | | 2.396 | 4.49 | 1600000 | 2.2666 | | 2.396 | 4.51 | 1608000 | 2.2708 | | 2.4012 | 4.54 | 1616000 | 2.2707 | | 2.4012 | 4.56 | 1624000 | 2.2684 | | 2.4074 | 4.58 | 1632000 | 2.2676 | | 2.4074 | 4.6 | 1640000 | 2.2658 | | 2.3965 | 4.63 | 1648000 | 2.2716 | | 2.3965 | 4.65 | 1656000 | 2.2656 | | 2.4021 | 4.67 | 1664000 | 2.2690 | | 2.4021 | 4.69 | 1672000 | 2.2656 | | 2.3981 | 4.72 | 1680000 | 2.2659 | | 2.3981 | 4.74 | 1688000 | 2.2667 | | 2.3974 | 4.76 | 1696000 | 2.2655 | | 2.3974 | 4.78 | 1704000 | 2.2676 | | 2.3964 | 4.81 | 1712000 | 2.2655 | | 2.3964 | 4.83 | 1720000 | 2.2636 | | 2.3933 | 4.85 | 1728000 | 2.2679 | | 2.3933 | 4.87 | 1736000 | 2.2667 | | 2.4066 | 4.9 | 1744000 | 2.2647 | | 2.4066 | 4.92 | 1752000 | 2.2657 | | 2.4027 | 4.94 | 1760000 | 2.2628 | | 2.4027 | 4.96 | 1768000 | 2.2642 | | 2.4029 | 4.99 | 1776000 | 2.2677 | | 2.4029 | 5.01 | 1784000 | 2.2704 | | 2.3958 | 5.03 | 1792000 | 2.2650 | | 2.3958 | 5.05 | 1800000 | 2.2650 | | 2.4054 | 5.08 | 1808000 | 2.2680 | | 2.4054 | 5.1 | 1816000 | 2.2601 | | 2.3984 | 5.12 | 1824000 | 2.2671 | | 2.3984 | 5.14 | 1832000 | 2.2639 | | 2.4005 | 5.16 | 1840000 | 2.2629 | | 2.4005 | 5.19 | 1848000 | 2.2656 | | 2.3962 | 5.21 | 1856000 | 2.2646 | | 2.3962 | 5.23 | 1864000 | 2.2571 | | 2.4033 | 5.25 | 1872000 | 2.2689 | | 2.4033 | 5.28 | 1880000 | 2.2632 | | 2.4064 | 5.3 | 1888000 | 2.2633 | | 2.4064 | 5.32 | 1896000 | 2.2694 | | 2.3967 | 5.34 | 1904000 | 2.2685 | | 2.3967 | 5.37 | 1912000 | 2.2636 | | 2.4002 | 5.39 | 1920000 | 2.2687 | | 2.4002 | 5.41 | 1928000 | 2.2632 | | 2.4045 | 5.43 | 1936000 | 2.2625 | | 2.4045 | 5.46 | 1944000 | 2.2677 | | 2.4096 | 5.48 | 1952000 | 2.2563 | | 2.4096 | 5.5 | 1960000 | 2.2642 | | 2.4004 | 5.52 | 1968000 | 2.2692 | | 2.4004 | 5.55 | 1976000 | 2.2696 | | 2.4065 | 5.57 | 1984000 | 2.2579 | | 2.4065 | 5.59 | 1992000 | 2.2660 | | 2.4025 | 5.61 | 2000000 | 2.2654 | | 2.4025 | 5.64 | 2008000 | 2.2706 | | 2.3993 | 5.66 | 2016000 | 2.2704 | | 2.3993 | 5.68 | 2024000 | 2.2664 | | 2.4034 | 5.7 | 2032000 | 2.2659 | | 2.4034 | 5.73 | 2040000 | 2.2680 | | 2.4004 | 5.75 | 2048000 | 2.2611 | | 2.4004 | 5.77 | 2056000 | 2.2646 | | 2.4025 | 5.79 | 2064000 | 2.2682 | | 2.4025 | 5.82 | 2072000 | 2.2646 | | 2.4063 | 5.84 | 2080000 | 2.2598 | | 2.4063 | 5.86 | 2088000 | 2.2673 | | 
2.4071 | 5.88 | 2096000 | 2.2646 | | 2.4071 | 5.91 | 2104000 | 2.2672 | | 2.401 | 5.93 | 2112000 | 2.2648 | | 2.401 | 5.95 | 2120000 | 2.2654 | | 2.402 | 5.97 | 2128000 | 2.2664 | | 2.402 | 6.0 | 2136000 | 2.2683 | | 2.4004 | 6.02 | 2144000 | 2.2618 | | 2.4004 | 6.04 | 2152000 | 2.2669 | | 2.4001 | 6.06 | 2160000 | 2.2630 | | 2.4001 | 6.09 | 2168000 | 2.2632 | | 2.4046 | 6.11 | 2176000 | 2.2696 | | 2.4046 | 6.13 | 2184000 | 2.2641 | | 2.405 | 6.15 | 2192000 | 2.2627 | | 2.405 | 6.18 | 2200000 | 2.2681 | | 2.4063 | 6.2 | 2208000 | 2.2604 | | 2.4063 | 6.22 | 2216000 | 2.2715 | | 2.3991 | 6.24 | 2224000 | 2.2683 | | 2.3991 | 6.27 | 2232000 | 2.2657 | | 2.405 | 6.29 | 2240000 | 2.2645 | | 2.405 | 6.31 | 2248000 | 2.2676 | | 2.3941 | 6.33 | 2256000 | 2.2706 | | 2.3941 | 6.36 | 2264000 | 2.2593 | | 2.4041 | 6.38 | 2272000 | 2.2679 | | 2.4041 | 6.4 | 2280000 | 2.2643 | | 2.4001 | 6.42 | 2288000 | 2.2728 | | 2.4001 | 6.44 | 2296000 | 2.2631 | | 2.3983 | 6.47 | 2304000 | 2.2636 | | 2.3983 | 6.49 | 2312000 | 2.2630 | | 2.4003 | 6.51 | 2320000 | 2.2663 | | 2.4003 | 6.53 | 2328000 | 2.2647 | | 2.3981 | 6.56 | 2336000 | 2.2669 | | 2.3981 | 6.58 | 2344000 | 2.2660 | | 2.3951 | 6.6 | 2352000 | 2.2692 | | 2.3951 | 6.62 | 2360000 | 2.2644 | | 2.4013 | 6.65 | 2368000 | 2.2610 | | 2.4013 | 6.67 | 2376000 | 2.2655 | | 2.4 | 6.69 | 2384000 | 2.2592 | | 2.4 | 6.71 | 2392000 | 2.2666 | | 2.3975 | 6.74 | 2400000 | 2.2685 | ### Framework versions - Transformers 4.35.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.14.0
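The usage sections of this card are left as "More information needed"; as a minimal illustration, the checkpoint should load with the standard `transformers` fill-mask pipeline (the example sentence below is illustrative only, and `<mask>` is the RoBERTa-style mask token):

```python
from transformers import pipeline

# Tweet-domain RoBERTa checkpoint; RoBERTa tokenizers use "<mask>".
fill = pipeline("fill-mask", model="DouglasPontes/2020-Q4-25p-filtered-random")

for pred in fill("The election results were <mask> this morning."):
    print(f"{pred['token_str']!r}  score={pred['score']:.3f}")
```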
nvidia/Llama2-13B-SteerLM-RM
nvidia
2024-02-22T19:05:14Z
23
8
nemo
[ "nemo", "nvidia", "steerlm", "llama2", "reward model", "text-generation", "en", "dataset:nvidia/HelpSteer", "dataset:OpenAssistant/oasst1", "arxiv:2311.09528", "arxiv:2310.05344", "license:llama2", "region:us" ]
text-generation
2024-02-19T02:49:42Z
--- license: llama2 library_name: nemo language: - en pipeline_tag: text-generation inference: false fine-tuning: true tags: - nvidia - steerlm - llama2 - reward model datasets: - nvidia/HelpSteer - OpenAssistant/oasst1 --- # Llama2-13B-SteerLM-RM ## License The use of this model is governed by the [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). ## Description: Llama2-13B-SteerLM-RM is a 13 billion parameter language model (with a context of up to 4,096 tokens) used as the Attribute Prediction Model in training [Llama2-70B-SteerLM-Chat](https://huggingface.co/nvidia/Llama2-70B-SteerLM-Chat). The Attribute Prediction Model is a multi-aspect reward model that rates model responses on the various aspects that make a response desirable, rather than producing the single score of a conventional reward model. Given a conversation with multiple turns between a user and an assistant, it rates the following attributes (between 0 and 4) for every assistant turn. 1. **Quality**: Perceived goodness of response. 2. **Toxicity**: Undesirable elements such as a vulgar, harmful or potentially biased response. 3. **Humor**: Sense of humor within response. 4. **Creativity**: Willingness to generate a non-conventional response. 5. **Helpfulness**: Overall helpfulness of the response to the prompt. 6. **Correctness**: Inclusion of all pertinent facts without errors. 7. **Coherence**: Consistency and clarity of expression. 8. **Complexity**: Intellectual depth required to write response (i.e. whether the response can be written by anyone with basic language competency or requires deep domain expertise). 9. **Verbosity**: Amount of detail included in the response, relative to what is asked for in the prompt. The first four attributes are taken from the [Open Assistant](https://huggingface.co/datasets/OpenAssistant/oasst1) dataset, while the others are taken from the [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer) dataset. HelpSteer paper: [HelpSteer: Multi-attribute Helpfulness Dataset for SteerLM](http://arxiv.org/abs/2311.09528) SteerLM paper: [SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF](https://arxiv.org/abs/2310.05344) Llama2-13B-SteerLM-RM is trained with NVIDIA [NeMo-Aligner](https://github.com/NVIDIA/NeMo-Aligner), a scalable toolkit for performant and efficient model alignment. NeMo-Aligner is built using the [NeMo Framework](https://github.com/NVIDIA/NeMo), which allows training to scale to thousands of GPUs using tensor, data, and pipeline parallelism for all components of alignment. All of our checkpoints are cross-compatible with the NeMo ecosystem, allowing for inference deployment and further customization. ## Usage: You can use the model with [NeMo Aligner](https://github.com/NVIDIA/NeMo-Aligner) following the [SteerLM training user guide](https://docs.nvidia.com/nemo-framework/user-guide/latest/modelalignment/steerlm.html). This model can be used to train a model like [Llama2-70B-SteerLM-Chat](https://huggingface.co/nvidia/Llama2-70B-SteerLM-Chat) or to annotate the attributes of any conversation. 1. Spin up an inference server within the [NeMo Aligner container](https://github.com/NVIDIA/NeMo-Aligner/blob/main/Dockerfile): ```bash
python /opt/NeMo-Aligner/examples/nlp/gpt/serve_reward_model.py \
    rm_model_file=Llama2-13B-SteerLM-RM.nemo \
    trainer.num_nodes=1 \
    trainer.devices=8 \
    ++model.tensor_model_parallel_size=4 \
    ++model.pipeline_model_parallel_size=1 \
    inference.micro_batch_size=2 \
    inference.port=1424
``` 2. Annotate data files using the served reward model.
If you are seeking to reproduce the training of [Llama2-70B-SteerLM-Chat](https://huggingface.co/nvidia/Llama2-70B-SteerLM-Chat), these will be the Open Assistant train/val files. Then follow the next step to train a SteerLM model based on the [SteerLM training user guide](https://docs.nvidia.com/nemo-framework/user-guide/latest/modelalignment/steerlm.html#step-5-train-the-attribute-conditioned-sft-model). ```bash
python /opt/NeMo-Aligner/examples/nlp/data/steerlm/preprocess_openassistant_data.py --output_directory=data/oasst

python /opt/NeMo-Aligner/examples/nlp/data/steerlm/attribute_annotate.py \
    --input-file=data/oasst/train.jsonl \
    --output-file=data/oasst/train_labeled.jsonl \
    --port=1424
``` 3. Alternatively, this can be any conversational data file (in .jsonl) in the following format, where each line looks like: ```json
{
  "conversations": [
    {"value": <user_turn_1>, "from": "User", "label": null},
    {"value": <assistant_turn_1>, "from": "Assistant", "label": <formatted_label_1>},
    {"value": <user_turn_2>, "from": "User", "label": null},
    {"value": <assistant_turn_2>, "from": "Assistant", "label": <formatted_label_2>}
  ],
  "mask": "User"
}
``` Ideally, each `<formatted_label_n>` refers to the ground-truth label for the assistant turn, but if they are not available, we can also use `quality:4,toxicity:0,humor:0,creativity:0,helpfulness:4,correctness:4,coherence:4,complexity:4,verbosity:4`. ## Contact E-Mail: [Zhilin Wang](mailto:[email protected]) ## Citation If you find this model useful, please cite the following works: ```bibtex
@misc{wang2023helpsteer,
      title={HelpSteer: Multi-attribute Helpfulness Dataset for SteerLM},
      author={Zhilin Wang and Yi Dong and Jiaqi Zeng and Virginia Adams and Makesh Narsimhan Sreedhar and Daniel Egert and Olivier Delalleau and Jane Polak Scowcroft and Neel Kant and Aidan Swope and Oleksii Kuchaiev},
      year={2023},
      eprint={2311.09528},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
``` ```bibtex
@misc{dong2023steerlm,
      title={SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF},
      author={Yi Dong and Zhilin Wang and Makesh Narsimhan Sreedhar and Xianchao Wu and Oleksii Kuchaiev},
      year={2023},
      eprint={2310.05344},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
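As a concrete illustration of the conversational `.jsonl` format described in step 3 above, here is a minimal sketch of writing one record; the example strings and output path are hypothetical, and the fallback label string is the one suggested in this card:

```python
import json

# Fallback attribute label from the card, used when ground-truth labels are unavailable.
default_label = ("quality:4,toxicity:0,humor:0,creativity:0,"
                 "helpfulness:4,correctness:4,coherence:4,complexity:4,verbosity:4")

# One conversation record; user turns carry no label (None -> null in JSON).
record = {
    "conversations": [
        {"value": "How do reward models work?", "from": "User", "label": None},
        {"value": "A reward model scores candidate responses so that preferred "
                  "responses receive higher values.", "from": "Assistant",
         "label": default_label},
    ],
    "mask": "User",
}

with open("data/custom/train.jsonl", "a") as f:  # hypothetical path
    f.write(json.dumps(record) + "\n")
```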
AlexanderHolmes0/llama-2-7b-chat-test
AlexanderHolmes0
2024-02-22T19:03:20Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-22T18:58:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
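The "How to Get Started" section above is empty; as a stop-gap, here is a minimal, hedged sketch that assumes this repo loads as a standard Llama-2-style chat checkpoint with a bundled chat template (the prompt and generation settings are illustrative, not from the card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AlexanderHolmes0/llama-2-7b-chat-test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Assumes the tokenizer ships a chat template (typical for Llama-2 chat derivatives).
messages = [{"role": "user", "content": "Summarize what a model card is in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```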
dataautogpt3/ProteusV0.4
dataautogpt3
2024-02-22T19:01:41Z
996
75
diffusers
[ "diffusers", "text-to-image", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-02-22T13:50:29Z
--- pipeline_tag: text-to-image widget: - text: >- 3 fish in a fish tank wearing adorable outfits, best quality, hd output: url: GGuziQaXYAAudCW.png - text: >- a woman sitting in a wooden chair in the middle of a grass field on a farm, moonlight, best quality, hd, anime art output: url: upscaled_image (1).webp - text: >- Masterpiece, glitch, holy holy holy, fog, by DarkIncursio output: url: GGvDC_qWUAAcuQA.jpeg - text: >- jpeg Full Body Photo of a weird imaginary Female creatures captured on celluloid film, (((ghost))),heavy rain, thunder, snow, water's surface, night, expressionless, Blood, Japan God,(school), Ultra Realistic, ((Scary)),looking at camera, screem, plaintive cries, Long claws, fangs, scales,8k, HDR, 500px, mysterious and ornate digital art, photic, intricate, fantasy aesthetic. output: url: upscaled_image2.png - text: >- The divine tree of knowledge, an interplay between purple and gold, floats in the void of the sea of quanta, the tree is made of crystal, the void is made of nothingness, strong contrast, dim lighting, beautiful and surreal scene. wide shot output: url: upscaled_image.png - text: >- The image features an older man, a long white beard and mustache, He has a stern expression, giving the impression of a wise and experienced individual. The mans beard and mustache are prominent, adding to his distinguished appearance. The close-up shot of the mans face emphasizes his facial features and the intensity of his gaze. output: url: old.png - text: >- Ghost in the Shell Stand Alone Complex output: url: upscaled_image4.png - text: >- (impressionistic realism by csybgh), a 50 something male, working in banking, very short dyed dark curly balding hair, Afro-Asiatic ancestry, talks a lot but listens poorly, stuck in the past, wearing a suit, he has a certain charm, bronze skintone, sitting in a bar at night, he is smoking and feeling cool, drunk on plum wine, masterpiece, 8k, hyper detailed, smokey ambiance, perfect hands AND fingers output: url: collage.png - text: >- black fluffy gorgeous dangerous cat animal creature, large orange eyes, big fluffy ears, piercing gaze, full moon, dark ambiance, best quality, extremely detailed output: url: collage2.png license: gpl-3.0 --- <Gallery /> ## ProteusV0.4: The Style Update This update enhances stylistic capabilities, similar to Midjourney's approach, rather than advancing prompt comprehension. Methods used do not infringe on any copyrighted material. ## Proteus Proteus serves as a sophisticated enhancement over OpenDalleV1.1, leveraging its core functionalities to deliver superior outcomes. Key areas of advancement include heightened responsiveness to prompts and augmented creative capacities. To achieve this, it was fine-tuned using approximately 220,000 GPTV captioned images from copyright-free stock images (with some anime included), which were then normalized. Additionally, DPO (Direct Preference Optimization) was employed through a collection of 10,000 carefully selected high-quality, AI-generated image pairs. In pursuit of optimal performance, numerous LORA (Low-Rank Adaptation) models are trained independently before being selectively incorporated into the principal model via dynamic application methods. These techniques involve targeting particular segments within the model while avoiding interference with other areas during the learning phase. 
Consequently, Proteus exhibits marked improvements in portraying intricate facial characteristics and lifelike skin textures, all while sustaining commendable proficiency across various aesthetic domains, notably surrealism, anime, and cartoon-style visualizations. In total, it has been fine-tuned/trained on more than 400k images at this point. ## Settings for ProteusV0.4 Use these settings for the best results with ProteusV0.4: CFG scale: 4 to 6. Steps: 20 to 60 (more steps for more detail, 20 steps for faster results). Sampler: DPM++ 2M SDE. Scheduler: Karras. Resolution: 1280x1280 or 1024x1024. Please also consider using these keywords to improve your prompts: best quality, HD, `~*~aesthetic~*~`. If you are having trouble coming up with prompts, you can use this GPT I put together to help you refine the prompt: https://chat.openai.com/g/g-RziQNoydR-diffusion-master ## Use it with 🧨 diffusers ```python
import torch
from diffusers import (
    StableDiffusionXLPipeline,
    KDPM2AncestralDiscreteScheduler,
    AutoencoderKL
)

# Load the fp16-safe SDXL VAE component
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix",
    torch_dtype=torch.float16
)

# Configure the pipeline
pipe = StableDiffusionXLPipeline.from_pretrained(
    "dataautogpt3/ProteusV0.4",
    vae=vae,
    torch_dtype=torch.float16
)
pipe.scheduler = KDPM2AncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.to('cuda')

# Define prompts and generate an image
prompt = "black fluffy gorgeous dangerous cat animal creature, large orange eyes, big fluffy ears, piercing gaze, full moon, dark ambiance, best quality, extremely detailed"
negative_prompt = "nsfw, bad quality, bad anatomy, worst quality, low quality, low resolutions, extra fingers, blur, blurry, ugly, wrongs proportions, watermark, image artifacts, lowres, ugly, jpeg artifacts, deformed, noisy image"

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=1024,
    height=1024,
    guidance_scale=4,
    num_inference_steps=20
).images[0]
``` Please support the work I do by donating at https://www.buymeacoffee.com/DataVoid or following me on https://twitter.com/DataPlusEngine
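Note that the snippet above configures `KDPM2AncestralDiscreteScheduler`, while the recommended settings call for DPM++ 2M SDE with the Karras schedule. A minimal sketch of matching those settings in diffusers (an editorial suggestion reusing the `pipe` from the snippet above, not part of the original card):

```python
from diffusers import DPMSolverMultistepScheduler

# DPM++ 2M SDE with Karras sigmas, matching the recommended settings above.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config,
    algorithm_type="sde-dpmsolver++",
    use_karras_sigmas=True,
)
```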
numen-tech/BioMistral-7B-w4a16g128asym
numen-tech
2024-02-22T18:58:51Z
0
0
null
[ "arxiv:2308.13137", "license:apache-2.0", "region:us" ]
null
2024-02-22T18:56:10Z
--- license: apache-2.0 --- 4-bit [OmniQuant](https://arxiv.org/abs/2308.13137) quantized version of [BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B).