Dataset schema (column name, type, and observed range):

| Column | Type | Observed range |
|---|---|---|
| `modelId` | string | 5–122 chars |
| `author` | string | 2–42 chars |
| `last_modified` | timestamp[us, tz=UTC] | — |
| `downloads` | int64 | 0–738M |
| `likes` | int64 | 0–11k |
| `library_name` | string (categorical) | 245 distinct values |
| `tags` | sequence | 1–4.05k items |
| `pipeline_tag` | string (categorical) | 48 distinct values |
| `createdAt` | timestamp[us, tz=UTC] | — |
| `card` | string | 1–901k chars |
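Rows with this schema are straightforward to explore once loaded into a DataFrame. A minimal sketch, assuming the records are available as a pandas DataFrame; the sample rows below are illustrative stand-ins, not real data:

```python
import pandas as pd

# Illustrative rows mirroring the schema above (values are examples, not real data).
df = pd.DataFrame(
    {
        "modelId": ["a/b-model", "c/d-model", "e/f-model"],
        "author": ["a", "c", "e"],
        "downloads": [0, 120, 5],
        "likes": [0, 3, 1],
        "library_name": ["transformers", None, "transformers"],
        "pipeline_tag": ["text-generation", None, "text-classification"],
    }
)

# Keep only rows that declare a pipeline tag (many rows have null
# library_name/pipeline_tag), sorted by download count.
tagged = df.dropna(subset=["pipeline_tag"]).sort_values("downloads", ascending=False)
print(tagged["modelId"].tolist())  # → ['e/f-model', 'a/b-model']
```

The same `dropna`/`sort_values` pattern applies unchanged to the full dataset.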
**Meziane/qwuestion_answering_T5_policy_qa_4**
- author: Meziane
- last_modified: 2024-07-01T13:30:18Z
- downloads: 0
- likes: 0
- library_name: null
- tags: `["region:us"]`
- pipeline_tag: null
- createdAt: 2024-07-01T13:30:18Z
- card: Entry not found
**Edgar404/donut-shivi-cheques_best_320_test**
- author: Edgar404
- last_modified: 2024-07-02T04:04:13Z
- downloads: 0
- likes: 0
- library_name: transformers
- tags: `["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"]`
- pipeline_tag: null
- createdAt: 2024-07-01T13:30:20Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
**mjm4dl/slot_filling_only_generated_llama3.csv**
- author: mjm4dl
- last_modified: 2024-07-01T13:34:13Z
- downloads: 0
- likes: 0
- library_name: transformers
- tags: `["transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us"]`
- pipeline_tag: text-generation
- createdAt: 2024-07-01T13:30:55Z
---
library_name: transformers
tags:
- trl
- sft
---
**aivatech/ai_friend**
- author: aivatech
- last_modified: 2024-07-01T13:40:18Z
- downloads: 0
- likes: 0
- library_name: null
- tags: `["region:us"]`
- pipeline_tag: null
- createdAt: 2024-07-01T13:31:06Z
This project was created to build an AI friend using Gemma 2 (via Ollama) and ElevenLabs.
**Moriacrafter/Qwen1.5-4B-8bit_DepressionDetection**
- author: Moriacrafter
- last_modified: 2024-07-01T13:34:15Z
- downloads: 0
- likes: 0
- library_name: transformers
- tags: `["transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us"]`
- pipeline_tag: text-generation
- createdAt: 2024-07-01T13:31:20Z
---
library_name: transformers
tags:
- llama-factory
---
**nowsyn/anycontrol**
- author: nowsyn
- last_modified: 2024-07-01T14:43:58Z
- downloads: 0
- likes: 0
- library_name: null
- tags: `["safetensors", "license:apache-2.0", "region:us"]`
- pipeline_tag: null
- createdAt: 2024-07-01T13:31:37Z
---
license: apache-2.0
---

# AnyControl: Create Your Artwork with Versatile Control on Text-to-Image Generation

[Yanan Sun](https://scholar.google.com/citations?user=6TA1oPkAAAAJ&hl=en), Yanchen Liu, Yinhao Tang, [Wenjie Pei](https://wenjiepei.github.io/) and [Kai Chen*](https://chenkai.site/)

**Shanghai AI Laboratory**

![](./assets/teaser.png "AnyControl")

## Overview

The field of text-to-image (T2I) generation has made significant progress in recent years, largely driven by advancements in diffusion models. Linguistic control enables effective content creation, but struggles with fine-grained control over image generation. This challenge has been explored, to a great extent, by incorporating additional user-supplied spatial conditions, such as depth maps and edge maps, into pre-trained T2I models through extra encoding. However, multi-control image synthesis still faces several challenges. Specifically, current approaches are limited in handling free combinations of diverse input control signals, overlook the complex relationships among multiple spatial conditions, and often fail to maintain semantic alignment with the provided textual prompts. This can lead to suboptimal user experiences. To address these challenges, we propose AnyControl, a multi-control image synthesis framework that supports arbitrary combinations of diverse control signals. AnyControl develops a novel Multi-Control Encoder that extracts a unified multi-modal embedding to guide the generation process. This approach enables a holistic understanding of user inputs and produces high-quality, faithful results under versatile control signals, as demonstrated by extensive quantitative and qualitative evaluations.

## Model Card

AnyControl for SD 1.5:

- `ckpts/anycontrol_15.ckpt`: weights for AnyControl.
- `ckpts/init_local.ckpt`: initial weights of AnyControl during training, generated following [Uni-ControlNet](https://github.com/ShihaoZhaoZSH/Uni-ControlNet).
- `ckpts/blip2_pretrained.pth`: third-party model.
- `annotator/ckpts`: third-party models used in annotators.

## License and Citation

All models and assets are under the [Apache 2.0 license](./LICENSE) unless specified otherwise. If this work is helpful for your research, please consider citing the following BibTeX entry.

```bibtex
@misc{sun2024anycontrol,
  title={AnyControl: Create your artwork with versatile control on text-to-image generation},
  author={Sun, Yanan and Liu, Yanchen and Tang, Yinhao and Pei, Wenjie and Chen, Kai},
  booktitle={ECCV},
  year={2024}
}
```
**NikiSP/results**
- author: NikiSP
- last_modified: 2024-07-01T13:32:09Z
- downloads: 0
- likes: 0
- library_name: transformers
- tags: `["transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]`
- pipeline_tag: text-classification
- createdAt: 2024-07-01T13:31:48Z
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
- f1
model-index:
- name: results
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# results

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 2.0751
- Precision: 0.5648
- Recall: 0.5655
- Accuracy: 0.5655
- F1: 0.5649

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--------:|:------:|
| 1.1943 | 1.0 | 367 | 1.1549 | 0.5341 | 0.5155 | 0.5155 | 0.5161 |
| 0.9064 | 2.0 | 734 | 1.1347 | 0.5568 | 0.5608 | 0.5608 | 0.5528 |
| 0.512 | 3.0 | 1101 | 1.4481 | 0.5636 | 0.5330 | 0.5330 | 0.5261 |
| 0.228 | 4.0 | 1468 | 1.7226 | 0.5633 | 0.5608 | 0.5608 | 0.5588 |
| 0.1355 | 5.0 | 1835 | 2.0751 | 0.5648 | 0.5655 | 0.5655 | 0.5649 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Tokenizers 0.19.1
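As a quick sanity check, the final-epoch F1 above is consistent with the harmonic mean of the reported precision and recall. A small deviation is expected, since the card's F1 is presumably averaged per class rather than computed from the aggregate values:

```python
# Final-epoch metrics reported in the training results table.
precision, recall, reported_f1 = 0.5648, 0.5655, 0.5649

# F1 as the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)

# Agrees with the reported value to ~3 decimal places;
# per-class averaging explains the tiny residual gap.
assert abs(f1 - reported_f1) < 1e-3
```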
**NikiSP/Movie_Genre_Classifier**
- author: NikiSP
- last_modified: 2024-07-01T13:32:11Z
- downloads: 0
- likes: 0
- library_name: transformers
- tags: `["transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us"]`
- pipeline_tag: null
- createdAt: 2024-07-01T13:31:48Z
---
library_name: transformers
tags: []
---
**Addax-Data-Science/Iran_v1**
- author: Addax-Data-Science
- last_modified: 2024-07-01T13:35:32Z
- downloads: 0
- likes: 0
- library_name: null
- tags: `["license:cc-by-nc-sa-4.0", "region:us"]`
- pipeline_tag: null
- createdAt: 2024-07-01T13:32:01Z
---
license: cc-by-nc-sa-4.0
---

A model that identifies 14 species or higher-level taxa found in Iran, trained on approximately 1 million camera-trap images. It achieves an overall validation accuracy, precision, and recall of 95%, 93%, and 94%, respectively. Accuracy was not tested on an out-of-sample dataset because local images were unavailable. The model was designed to expedite the monitoring efforts of the Iranian Cheetah Society.
**Franzin/bigbird-roberta-base-goemotions-ekman-multiclass**
- author: Franzin
- last_modified: 2024-07-01T13:32:29Z
- downloads: 0
- likes: 0
- library_name: transformers
- tags: `["transformers", "safetensors", "big_bird", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"]`
- pipeline_tag: text-classification
- createdAt: 2024-07-01T13:32:17Z
---
library_name: transformers
tags: []
---
OnFinanceAI/setup__llama_instr_ft
OnFinanceAI
2024-07-01T13:33:09Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-07-01T13:32:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Torazo5/marian-finetuned-kde4-en-to-fr
Torazo5
2024-07-01T13:33:48Z
0
0
null
[ "region:us" ]
null
2024-07-01T13:33:48Z
Entry not found
hcy5561/tapas-base-finetuned-sqa-model
hcy5561
2024-07-01T13:33:53Z
0
0
null
[ "region:us" ]
null
2024-07-01T13:33:53Z
Entry not found
trustlelab/bald-hair-classification
trustlelab
2024-07-01T13:40:19Z
0
0
keras
[ "keras", "image-classification", "en", "region:us" ]
image-classification
2024-07-01T13:34:01Z
--- language: - en library_name: keras pipeline_tag: image-classification ---
itay-nakash/model_42d9b05c5c_sweep_super-gorge-1156
itay-nakash
2024-07-01T13:34:51Z
0
0
null
[ "region:us" ]
null
2024-07-01T13:34:51Z
Entry not found
Vinay-96/LLama2_Finetuned_jeopardy_QnA
Vinay-96
2024-07-01T13:52:21Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-classification
2024-07-01T13:35:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sara-m98/ECO_BERT-BILSTM_FINAL
sara-m98
2024-07-02T07:05:49Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-07-01T13:35:39Z
| Epoch | Training Loss | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-----:|:-------------:|:---------------:|:---------:|:------:|:--:|:--------:|
| 1 | No log | 0.066976 | 0.387250 | 0.329733 | 0.356184 | 0.984486 |
| 2 | 0.106500 | 0.058111 | 0.395311 | 0.391574 | 0.393434 | 0.985573 |
| 3 | 0.106500 | 0.058637 | 0.393105 | 0.458817 | 0.423427 | 0.985601 |
| 4 | 0.028900 | 0.065462 | 0.396026 | 0.462868 | 0.426846 | 0.984953 |
| 5 | 0.028900 | 0.070476 | 0.442568 | 0.467189 | 0.454545 | 0.985966 |
| 6 | 0.015500 | 0.069267 | 0.425043 | 0.469349 | 0.446099 | 0.986287 |
| 7 | 0.015500 | 0.078341 | 0.451092 | 0.490683 | 0.470056 | 0.986064 |
| 8 | 0.009200 | 0.085571 | 0.431022 | 0.481772 | 0.454986 | 0.985717 |
| 9 | 0.009200 | 0.087308 | 0.442126 | 0.480691 | 0.460603 | 0.985725 |
| 10 | 0.005900 | 0.092452 | 0.463224 | 0.479611 | 0.471275 | 0.986054 |
| 11 | 0.005900 | 0.092327 | 0.437395 | 0.493384 | 0.463706 | 0.985844 |
| 12 | 0.004300 | 0.100381 | 0.452416 | 0.495544 | 0.472999 | 0.986165 |
| 13 | 0.004300 | 0.092396 | 0.446150 | 0.486632 | 0.465513 | 0.986190 |
| 14 | 0.003100 | 0.096234 | 0.467906 | 0.496084 | 0.481583 | 0.986704 |
| 15 | 0.003100 | 0.102940 | 0.452968 | 0.486362 | 0.469071 | 0.986141 |
| 16 | 0.002500 | 0.101856 | 0.464897 | 0.491763 | 0.477953 | 0.986278 |
| 17 | 0.002500 | 0.105866 | 0.456962 | 0.487443 | 0.471710 | 0.986066 |
| 18 | 0.002200 | 0.106126 | 0.474086 | 0.486632 | 0.480277 | 0.986332 |
| 19 | 0.002200 | 0.107025 | 0.462751 | 0.491493 | 0.476689 | 0.986511 |
| 20 | 0.001700 | 0.106984 | 0.469367 | 0.494464 | 0.481589 | 0.986270 |
| 21 | 0.001700 | 0.106669 | 0.474589 | 0.506886 | 0.490206 | 0.986484 |
| 22 | 0.001500 | 0.111114 | 0.471334 | 0.501755 | 0.486069 | 0.986311 |
| 23 | 0.001500 | 0.112432 | 0.460683 | 0.492033 | 0.475842 | 0.986464 |
| 24 | 0.001300 | 0.110102 | 0.475635 | 0.506076 | 0.490383 | 0.986519 |
| 25 | 0.001300 | 0.116960 | 0.467344 | 0.496624 | 0.481540 | 0.986453 |
| 26 | 0.001200 | 0.118941 | 0.469414 | 0.495274 | 0.481997 | 0.986373 |
| 27 | 0.001200 | 0.120318 | 0.476589 | 0.492033 | 0.484188 | 0.986326 |
| 28 | 0.001100 | 0.120894 | 0.481053 | 0.493654 | 0.487272 | 0.986472 |
| 29 | 0.001100 | 0.122397 | 0.481988 | 0.495004 | 0.488409 | 0.986480 |
| 30 | 0.000900 | 0.120894 | 0.474607 | 0.497164 | 0.485624 | 0.986445 |
| 31 | 0.000900 | 0.121767 | 0.475380 | 0.497975 | 0.486415 | 0.986499 |
| 32 | 0.000900 | 0.121362 | 0.476471 | 0.503106 | 0.489426 | 0.986528 |

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6634a6ddbcf56d1302dc1e82/bibA-cmjmjxkqtRYPgNBk.png)
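As a quick sanity check on the table above, each row's F1 is the harmonic mean of that row's precision and recall. A minimal sketch, using the values reported for the final epoch (32):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall, as reported in the table."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Epoch 32 from the table: precision 0.476471, recall 0.503106
print(round(f1_score(0.476471, 0.503106), 6))  # ~0.489426, matching the reported F1
```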
Grayx/john_paul_van_damme_62
Grayx
2024-07-01T13:36:13Z
0
0
null
[ "region:us" ]
null
2024-07-01T13:35:58Z
Entry not found
Adi-0-0-Gupta/Embedding-v2-64
Adi-0-0-Gupta
2024-07-01T13:36:17Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:75086", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "model-index", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
2024-07-01T13:36:14Z
--- datasets: [] language: [] library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:75086 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss widget: - source_sentence: 'Tag: Vegetable Pulao Preparation time (ingredient) for Vegetable Pulao based on different serving sizes: Serving 1 - 15 mins, Serving 2 - 18 mins, Serving 3 - 20 mins, Serving 4 - 22 mins' sentences: - What diet tags are relevant to Kimchi Fried Rice (Chicken)? - What dietary classifications are suitable for Chicken & Broccoli Alfredo? - What is the time required to prepare ingredients for Vegetable Pulao? - source_sentence: "Tag: Vegetable Pulao\n\nMacro ingredients required to cook Vegetable\ \ Pulao:\nOrange Carrot, French Bean, Cauliflower, Plain Unsweetened Yogurt, Red\ \ Onion, Clove, Bay Leaf, Green Cardamom, Ginger-Garlic Paste, Green Chili Pepper,\ \ Cinnamon, Basmati Rice, Fresh Cilantro, Fresh Mint\n\nPreparations (ingredient)\ \ needed to cook Vegetable Pulao:\nWash the rice twice, and then soak it for at\ \ least 20 minutes. Drain the water and transfer the rice into the macro container.\ \ \nMix the yogurt, whole spices, green chili, and ginger garlic paste with the\ \ chopped veggies in a separate bowl, and then transfer it to the macro container.\ \ Please make sure to use plain yogurt. 
If using greek yogurt, use half the quantity\ \ of plain yogurt.\n\nTotal calories (nutritional energy) in Vegetable Pulao based\ \ on different serving sizes: Serving 1 - 300 mins, Serving 2 - 500 mins, Serving\ \ 3 - 700 mins, Serving 4 - 900 mins" sentences: - Can you give me some insights into Scrambled Eggs with Veggies? - How should the ingredients for Chicken Pad Thai be prepared? - What's the calorie figure for Vegetable Pulao? - source_sentence: 'Tag: Chicken Pad Thai Spatula required to cook chicken pad thai based on different serving sizes: Serving 1 - noodle spatula, Serving 2 - noodle spatula, Serving 3 - noodle spatula, Serving 4 - noodle spatula' sentences: - What are the detailed cooking instructions for Rava Upma? - What's the best way to prep ingredients for Teriyaki Tofu? - What kind of spatula do you need for Chicken Pad Thai? - source_sentence: 'Tag: Kimchi Fried Rice (Chicken) A small description of Kimchi Fried Rice (Chicken): Kimchi fried rice is made with kimchi, spicy gochujang, and garlic. The umami flavors from the kimchi juice balance beautifully with the spicy gochujang sauce and soy sauce, also creating that beautiful red-tinted color. ' sentences: - What spatula would you recommend for Vegetable Pulao? - How can I improve the presentation of Chicken Pad Thai with garnishes? - How would you describe the dish Kimchi Fried Rice (Chicken)? - source_sentence: 'Tag: Rava Upma Cook time of Rava Upma based on different serving sizes: Serving 1 - 26 mins, Serving 2 - 26 mins, Serving 3 - 28 mins, Serving 4 - 30 mins Preparation time (ingredient) for Rava Upma based on different serving sizes: Serving 1 - 6 mins, Serving 2 - 7 mins, Serving 3 - 8 mins, Serving 4 - 10 mins' sentences: - How long does it take to prepare ingredients for Rava Upma? - What are some final touch tips for Rava Upma? - How would you summarize Mac & Cheese? 
model-index: - name: SentenceTransformer results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 384 type: dim_384 metrics: - type: cosine_accuracy@1 value: 0.9431019051272216 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9684183608234241 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9704641350210971 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9943741209563994 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.9431019051272216 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.8694540340109961 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.8533691343817927 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.795371435877765 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.15481705915468094 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.24073986604144076 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.32643031270609696 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.5043459566396565 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.918622245854727 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9581816761141663 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7389009694677152 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.9448919575501854 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9704641350210971 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.97250990921877 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9942462600690449 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.9448919575501854 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.8702211993351234 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.8534458509142053 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.796407109065337 name: Cosine 
Precision@10 - type: cosine_recall@1 value: 0.15493928650775676 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.24095543792672336 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.32656065320980354 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.5047541251867395 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9196805870274929 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9598828347773509 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7369163574373101 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 128 type: dim_128 metrics: - type: cosine_accuracy@1 value: 0.9487277841708222 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9727656309934791 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9758342922899885 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9941183991816903 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.9487277841708222 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.8725653156032902 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.8555427694668201 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.7964454673315434 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.15515249707637835 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.24121211958393993 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.3269542903263234 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.5049379410293565 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9205498440081438 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9627833488592988 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7339144971303314 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 64 type: dim_64 metrics: - type: cosine_accuracy@1 value: 0.949622810382304 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9744278225290883 name: Cosine 
Accuracy@3 - type: cosine_accuracy@5 value: 0.9777522056003068 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9943741209563994 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.949622810382304 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.8755061160124451 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.8594553126198696 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.7982610919319781 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.15511449448310274 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.2414942027072444 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.32761337610101815 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.5053322703457185 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9226639618734229 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9639710344351698 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7303591002200212 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 32 type: dim_32 metrics: - type: cosine_accuracy@1 value: 0.9451476793248945 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9731492136555427 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9783915100370797 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9947577036184632 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.9451476793248945 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.8679623236585261 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.8521416698631887 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.7814473852448537 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.15477631695655358 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.2399798683039478 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.3251298048319623 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.49721034132531955 name: Cosine Recall@10 - 
type: cosine_ndcg@10 value: 0.9089797806768267 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9616667478481831 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.708556549543554 name: Cosine Map@100 --- # SentenceTransformer This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Adi-0-0-Gupta/Embedding-v2-64") # Run inference sentences = [ 'Tag: Rava Upma\n\nCook time of Rava Upma based on different serving sizes: Serving 1 - 26 mins, Serving 2 - 26 mins, Serving 3 - 28 mins, Serving 4 - 30 mins\n\nPreparation time (ingredient) for Rava Upma based on different serving sizes: Serving 1 - 6 mins, Serving 2 - 7 mins, Serving 3 - 8 mins, Serving 4 - 10 mins', 'How long does it take to prepare ingredients for Rava Upma?', 'What are some final touch tips for Rava Upma?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_384` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.9431 | | cosine_accuracy@3 | 0.9684 | | cosine_accuracy@5 | 0.9705 | | cosine_accuracy@10 | 0.9944 | | cosine_precision@1 | 0.9431 | | cosine_precision@3 | 0.8695 | | cosine_precision@5 | 0.8534 | | cosine_precision@10 | 0.7954 | | cosine_recall@1 | 0.1548 | | cosine_recall@3 | 0.2407 | | cosine_recall@5 | 0.3264 | | cosine_recall@10 | 0.5043 | | cosine_ndcg@10 | 0.9186 | | cosine_mrr@10 | 0.9582 | | **cosine_map@100** | **0.7389** | #### Information Retrieval * Dataset: `dim_256` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.9449 | | cosine_accuracy@3 | 0.9705 | | cosine_accuracy@5 | 0.9725 | | cosine_accuracy@10 | 0.9942 | | cosine_precision@1 | 0.9449 | | cosine_precision@3 | 0.8702 | | cosine_precision@5 | 0.8534 | | cosine_precision@10 | 0.7964 | | cosine_recall@1 | 0.1549 | | cosine_recall@3 | 0.241 | | cosine_recall@5 | 0.3266 | | cosine_recall@10 | 0.5048 | | cosine_ndcg@10 | 0.9197 | | cosine_mrr@10 | 0.9599 | | **cosine_map@100** | **0.7369** | #### Information Retrieval * Dataset: `dim_128` * Evaluated with 
[<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.9487 | | cosine_accuracy@3 | 0.9728 | | cosine_accuracy@5 | 0.9758 | | cosine_accuracy@10 | 0.9941 | | cosine_precision@1 | 0.9487 | | cosine_precision@3 | 0.8726 | | cosine_precision@5 | 0.8555 | | cosine_precision@10 | 0.7964 | | cosine_recall@1 | 0.1552 | | cosine_recall@3 | 0.2412 | | cosine_recall@5 | 0.327 | | cosine_recall@10 | 0.5049 | | cosine_ndcg@10 | 0.9205 | | cosine_mrr@10 | 0.9628 | | **cosine_map@100** | **0.7339** | #### Information Retrieval * Dataset: `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.9496 | | cosine_accuracy@3 | 0.9744 | | cosine_accuracy@5 | 0.9778 | | cosine_accuracy@10 | 0.9944 | | cosine_precision@1 | 0.9496 | | cosine_precision@3 | 0.8755 | | cosine_precision@5 | 0.8595 | | cosine_precision@10 | 0.7983 | | cosine_recall@1 | 0.1551 | | cosine_recall@3 | 0.2415 | | cosine_recall@5 | 0.3276 | | cosine_recall@10 | 0.5053 | | cosine_ndcg@10 | 0.9227 | | cosine_mrr@10 | 0.964 | | **cosine_map@100** | **0.7304** | #### Information Retrieval * Dataset: `dim_32` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.9451 | | cosine_accuracy@3 | 0.9731 | | cosine_accuracy@5 | 0.9784 | | cosine_accuracy@10 | 0.9948 | | cosine_precision@1 | 0.9451 | | cosine_precision@3 | 0.868 | | cosine_precision@5 | 0.8521 | 
| cosine_precision@10 | 0.7814 | | cosine_recall@1 | 0.1548 | | cosine_recall@3 | 0.24 | | cosine_recall@5 | 0.3251 | | cosine_recall@10 | 0.4972 | | cosine_ndcg@10 | 0.909 | | cosine_mrr@10 | 0.9617 | | **cosine_map@100** | **0.7086** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 75,086 training samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 20 tokens</li><li>mean: 150.64 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 15.44 tokens</li><li>max: 22 tokens</li></ul> | * Samples: | positive | anchor | 
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------| | <code>Tag: Beef and Broccoli<br><br>Spatula required to cook beef and broccoli based on different serving sizes: Serving 1 - flipping spatula, Serving 2 - flipping spatula, Serving 3 - flipping spatula, Serving 4 - flipping spatula<br><br>Recipes similar to beef and broccoli: Pepper Steak Skillet, Beef & Arugula Stir-Fry, Sticky Beef & Zucchini, Beef Kaldereta, Garlic Butter Steak Bites, Roasted Broccoli & Carrots, Beef Skillet Lasagna, Beef Stew, Keto Beef & Cabbage<br><br>Garnishing tips for Beef and Broccoli: Best served on it's own or on top of hot rice with chopped scallions!<br><br>A small description of Beef and Broccoli: Stir fried broccoli and tender beef strips stir-fried in a rich savory sauce.<br><br>For Beef and Broccoli, these dietary tags go well with it: dinner, contains soy, meat recipes, asian american cuisine, lunch, american cuisine, beef recipes, asian 
cuisine, chinese cuisine, hearty recipes, rice recipes, protein rich recipes, non vegetarian, saucy recipes, stir fry recipes, healthy recipes</code> | <code>How do you describe Beef and Broccoli?</code> | | <code>Tag: Beef and Broccoli<br><br>A small description of Beef and Broccoli: Stir fried broccoli and tender beef strips stir-fried in a rich savory sauce.</code> | <code>How do you describe Beef and Broccoli?</code> | | <code>Tag: Beef and Broccoli<br><br>Garnishing tips for Beef and Broccoli: Best served on it's own or on top of hot rice with chopped scallions!<br><br>Preparations (ingredient) needed to cook Beef and Broccoli:<br>Marinate the beef slices with soy sauce and bakig soda for at least 20 minutes. Use rib-eye steak for best results. Alternatively you can also use flank steak or skirt steak.<br><br>Recipes similar to beef and broccoli: Pepper Steak Skillet, Beef & Arugula Stir-Fry, Sticky Beef & Zucchini, Beef Kaldereta, Garlic Butter Steak Bites, Roasted Broccoli & Carrots, Beef Skillet Lasagna, Beef Stew, Keto Beef & Cabbage<br><br>Cook time of Beef and Broccoli based on different serving sizes: Serving 1 - 20 mins, Serving 2 - 25 mins, Serving 3 - 30 mins, Serving 4 - 35 mins<br><br>Macro ingredients required to cook Beef and Broccoli:<br>Broccoli, Soy Sauce, Ribeye Steak, Soy Sauce, Garlic, Scallion, Ginger, Baking Soda</code> | <code>What are some classic garnishes for Beef and Broccoli?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 384, 256, 128, 64, 32 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `gradient_accumulation_steps`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 
100 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `tf32`: True - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 16 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 100 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: True - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 
'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_32_cosine_map@100 | dim_384_cosine_map@100 | dim_64_cosine_map@100 | |:------:|:----:|:-------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|:---------------------:| | 0.0682 | 10 | 11.9607 | - | - | - | - | - | | 0.1363 | 20 | 12.1342 | - | - | - | - | - | | 0.2045 | 30 | 11.8794 | - | - | - | - | - | | 0.2727 | 40 | 11.838 | - | - | - 
| - | - | | 0.3409 | 50 | 11.9675 | - | - | - | - | - | | 0.4090 | 60 | 11.5518 | - | - | - | - | - | | 0.4772 | 70 | 11.3832 | - | - | - | - | - | | 0.5454 | 80 | 11.2516 | - | - | - | - | - | | 0.6135 | 90 | 11.1272 | - | - | - | - | - | | 0.6817 | 100 | 10.9423 | - | - | - | - | - | | 0.7499 | 110 | 5.0611 | - | - | - | - | - | | 0.8181 | 120 | 0.2761 | - | - | - | - | - | | 0.8862 | 130 | 5.5841 | - | - | - | - | - | | 0.9544 | 140 | 5.453 | - | - | - | - | - | | 1.0021 | 147 | - | 0.7339 | 0.7369 | 0.7086 | 0.7389 | 0.7304 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.1.2+cu121 - Accelerate: 0.31.0 - Datasets: 2.19.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly 
define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
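The card above trains with `MatryoshkaLoss` over dimensions [384, 256, 128, 64, 32] and reports retrieval metrics at each size. As a rough illustration of how such embeddings are typically consumed (this sketch is not part of the generated card — `truncate_embedding` and `cosine` are hypothetical helpers operating on stand-in vectors, not real model output), a Matryoshka embedding can be cut down to a prefix and re-normalized before cosine scoring:

```python
import math

def truncate_embedding(vector, dim):
    """Keep only the first `dim` Matryoshka components and re-normalize,
    so cosine similarity stays meaningful at the reduced size."""
    head = vector[:dim]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

def cosine(a, b):
    """Dot product of two vectors; inputs are assumed unit-norm."""
    return sum(x * y for x, y in zip(a, b))

# Stand-in vectors; the real model emits 384-dimensional embeddings.
doc = truncate_embedding([0.1] * 384, 128)
query = truncate_embedding([0.1] * 200 + [-0.1] * 184, 128)

# The two vectors differ only beyond component 128, so after truncation
# they collapse to the same point and score a perfect similarity.
print(round(cosine(doc, query), 4))  # 1.0
```

Recent sentence-transformers releases also expose a `truncate_dim` argument on `SentenceTransformer(...)` that applies this truncation inside `encode`, which is the more convenient route in practice.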
Grayx/john_paul_van_damme_63
Grayx
2024-07-01T13:36:26Z
0
0
null
[ "region:us" ]
null
2024-07-01T13:36:14Z
Entry not found
Grayx/john_paul_van_damme_64
Grayx
2024-07-01T13:36:55Z
0
0
null
[ "region:us" ]
null
2024-07-01T13:36:44Z
Entry not found
Grayx/john_paul_van_damme_65
Grayx
2024-07-01T13:37:48Z
0
0
null
[ "region:us" ]
null
2024-07-01T13:37:36Z
Entry not found
Adi-0-0-Gupta/Embedding-v2-128
Adi-0-0-Gupta
2024-07-01T13:37:54Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:75086", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "model-index", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
2024-07-01T13:37:50Z
--- datasets: [] language: [] library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:75086 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss widget: - source_sentence: 'Tag: Vegetable Pulao Preparation time (ingredient) for Vegetable Pulao based on different serving sizes: Serving 1 - 15 mins, Serving 2 - 18 mins, Serving 3 - 20 mins, Serving 4 - 22 mins' sentences: - What diet tags are relevant to Kimchi Fried Rice (Chicken)? - What dietary classifications are suitable for Chicken & Broccoli Alfredo? - What is the time required to prepare ingredients for Vegetable Pulao? - source_sentence: "Tag: Vegetable Pulao\n\nMacro ingredients required to cook Vegetable\ \ Pulao:\nOrange Carrot, French Bean, Cauliflower, Plain Unsweetened Yogurt, Red\ \ Onion, Clove, Bay Leaf, Green Cardamom, Ginger-Garlic Paste, Green Chili Pepper,\ \ Cinnamon, Basmati Rice, Fresh Cilantro, Fresh Mint\n\nPreparations (ingredient)\ \ needed to cook Vegetable Pulao:\nWash the rice twice, and then soak it for at\ \ least 20 minutes. Drain the water and transfer the rice into the macro container.\ \ \nMix the yogurt, whole spices, green chili, and ginger garlic paste with the\ \ chopped veggies in a separate bowl, and then transfer it to the macro container.\ \ Please make sure to use plain yogurt. 
If using greek yogurt, use half the quantity\ \ of plain yogurt.\n\nTotal calories (nutritional energy) in Vegetable Pulao based\ \ on different serving sizes: Serving 1 - 300 mins, Serving 2 - 500 mins, Serving\ \ 3 - 700 mins, Serving 4 - 900 mins" sentences: - Can you give me some insights into Scrambled Eggs with Veggies? - How should the ingredients for Chicken Pad Thai be prepared? - What’s the calorie figure for Vegetable Pulao? - source_sentence: 'Tag: Chicken Pad Thai Spatula required to cook chicken pad thai based on different serving sizes: Serving 1 - noodle spatula, Serving 2 - noodle spatula, Serving 3 - noodle spatula, Serving 4 - noodle spatula' sentences: - What are the detailed cooking instructions for Rava Upma? - What’s the best way to prep ingredients for Teriyaki Tofu? - What kind of spatula do you need for Chicken Pad Thai? - source_sentence: 'Tag: Kimchi Fried Rice (Chicken) A small description of Kimchi Fried Rice (Chicken): Kimchi fried rice is made with kimchi, spicy gochujang, and garlic. The umami flavors from the kimchi juice balance beautifully with the spicy gochujang sauce and soy sauce, also creating that beautiful red-tinted color. ' sentences: - What spatula would you recommend for Vegetable Pulao? - How can I improve the presentation of Chicken Pad Thai with garnishes? - How would you describe the dish Kimchi Fried Rice (Chicken)? - source_sentence: 'Tag: Rava Upma Cook time of Rava Upma based on different serving sizes: Serving 1 - 26 mins, Serving 2 - 26 mins, Serving 3 - 28 mins, Serving 4 - 30 mins Preparation time (ingredient) for Rava Upma based on different serving sizes: Serving 1 - 6 mins, Serving 2 - 7 mins, Serving 3 - 8 mins, Serving 4 - 10 mins' sentences: - How long does it take to prepare ingredients for Rava Upma? - What are some final touch tips for Rava Upma? - How would you summarize Mac & Cheese? 
model-index: - name: SentenceTransformer results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 384 type: dim_384 metrics: - type: cosine_accuracy@1 value: 0.9431019051272216 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9684183608234241 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9704641350210971 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9943741209563994 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.9431019051272216 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.8694540340109961 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.8533691343817927 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.795371435877765 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.15481705915468094 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.24073986604144076 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.32643031270609696 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.5043459566396565 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.918622245854727 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9581816761141663 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7389009694677152 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.9448919575501854 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9704641350210971 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.97250990921877 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9942462600690449 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.9448919575501854 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.8702211993351234 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.8534458509142053 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.796407109065337 name: Cosine 
Precision@10 - type: cosine_recall@1 value: 0.15493928650775676 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.24095543792672336 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.32656065320980354 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.5047541251867395 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9196805870274929 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9598828347773509 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7369163574373101 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 128 type: dim_128 metrics: - type: cosine_accuracy@1 value: 0.9487277841708222 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9727656309934791 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9758342922899885 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9941183991816903 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.9487277841708222 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.8725653156032902 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.8555427694668201 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.7964454673315434 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.15515249707637835 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.24121211958393993 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.3269542903263234 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.5049379410293565 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9205498440081438 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9627833488592988 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7339144971303314 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 64 type: dim_64 metrics: - type: cosine_accuracy@1 value: 0.949622810382304 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9744278225290883 name: Cosine 
Accuracy@3 - type: cosine_accuracy@5 value: 0.9777522056003068 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9943741209563994 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.949622810382304 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.8755061160124451 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.8594553126198696 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.7982610919319781 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.15511449448310274 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.2414942027072444 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.32761337610101815 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.5053322703457185 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9226639618734229 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9639710344351698 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7303591002200212 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 32 type: dim_32 metrics: - type: cosine_accuracy@1 value: 0.9451476793248945 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9731492136555427 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9783915100370797 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9947577036184632 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.9451476793248945 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.8679623236585261 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.8521416698631887 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.7814473852448537 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.15477631695655358 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.2399798683039478 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.3251298048319623 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.49721034132531955 name: Cosine Recall@10 - 
type: cosine_ndcg@10 value: 0.9089797806768267 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9616667478481831 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.708556549543554 name: Cosine Map@100 --- # SentenceTransformer This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the πŸ€— Hub model = SentenceTransformer("Adi-0-0-Gupta/Embedding-v2-128") # Run inference sentences = [ 'Tag: Rava Upma\n\nCook time of Rava Upma based on different serving sizes: Serving 1 - 26 mins, Serving 2 - 26 mins, Serving 3 - 28 mins, Serving 4 - 30 mins\n\nPreparation time (ingredient) for Rava Upma based on different serving sizes: Serving 1 - 6 mins, Serving 2 - 7 mins, Serving 3 - 8 mins, Serving 4 - 10 mins', 'How long does it take to prepare ingredients for Rava Upma?', 'What are some final touch tips for Rava Upma?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_384` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.9431 | | cosine_accuracy@3 | 0.9684 | | cosine_accuracy@5 | 0.9705 | | cosine_accuracy@10 | 0.9944 | | cosine_precision@1 | 0.9431 | | cosine_precision@3 | 0.8695 | | cosine_precision@5 | 0.8534 | | cosine_precision@10 | 0.7954 | | cosine_recall@1 | 0.1548 | | cosine_recall@3 | 0.2407 | | cosine_recall@5 | 0.3264 | | cosine_recall@10 | 0.5043 | | cosine_ndcg@10 | 0.9186 | | cosine_mrr@10 | 0.9582 | | **cosine_map@100** | **0.7389** | #### Information Retrieval * Dataset: `dim_256` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.9449 | | cosine_accuracy@3 | 0.9705 | | cosine_accuracy@5 | 0.9725 | | cosine_accuracy@10 | 0.9942 | | cosine_precision@1 | 0.9449 | | cosine_precision@3 | 0.8702 | | cosine_precision@5 | 0.8534 | | cosine_precision@10 | 0.7964 | | cosine_recall@1 | 0.1549 | | cosine_recall@3 | 0.241 | | cosine_recall@5 | 0.3266 | | cosine_recall@10 | 0.5048 | | cosine_ndcg@10 | 0.9197 | | cosine_mrr@10 | 0.9599 | | **cosine_map@100** | **0.7369** | #### Information Retrieval * Dataset: `dim_128` * Evaluated with 
[<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.9487 | | cosine_accuracy@3 | 0.9728 | | cosine_accuracy@5 | 0.9758 | | cosine_accuracy@10 | 0.9941 | | cosine_precision@1 | 0.9487 | | cosine_precision@3 | 0.8726 | | cosine_precision@5 | 0.8555 | | cosine_precision@10 | 0.7964 | | cosine_recall@1 | 0.1552 | | cosine_recall@3 | 0.2412 | | cosine_recall@5 | 0.327 | | cosine_recall@10 | 0.5049 | | cosine_ndcg@10 | 0.9205 | | cosine_mrr@10 | 0.9628 | | **cosine_map@100** | **0.7339** | #### Information Retrieval * Dataset: `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.9496 | | cosine_accuracy@3 | 0.9744 | | cosine_accuracy@5 | 0.9778 | | cosine_accuracy@10 | 0.9944 | | cosine_precision@1 | 0.9496 | | cosine_precision@3 | 0.8755 | | cosine_precision@5 | 0.8595 | | cosine_precision@10 | 0.7983 | | cosine_recall@1 | 0.1551 | | cosine_recall@3 | 0.2415 | | cosine_recall@5 | 0.3276 | | cosine_recall@10 | 0.5053 | | cosine_ndcg@10 | 0.9227 | | cosine_mrr@10 | 0.964 | | **cosine_map@100** | **0.7304** | #### Information Retrieval * Dataset: `dim_32` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.9451 | | cosine_accuracy@3 | 0.9731 | | cosine_accuracy@5 | 0.9784 | | cosine_accuracy@10 | 0.9948 | | cosine_precision@1 | 0.9451 | | cosine_precision@3 | 0.868 | | cosine_precision@5 | 0.8521 | 
| cosine_precision@10 | 0.7814 | | cosine_recall@1 | 0.1548 | | cosine_recall@3 | 0.24 | | cosine_recall@5 | 0.3251 | | cosine_recall@10 | 0.4972 | | cosine_ndcg@10 | 0.909 | | cosine_mrr@10 | 0.9617 | | **cosine_map@100** | **0.7086** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 75,086 training samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 20 tokens</li><li>mean: 150.64 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 15.44 tokens</li><li>max: 22 tokens</li></ul> | * Samples: | positive | anchor | 
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------| | <code>Tag: Beef and Broccoli<br><br>Spatula required to cook beef and broccoli based on different serving sizes: Serving 1 - flipping spatula, Serving 2 - flipping spatula, Serving 3 - flipping spatula, Serving 4 - flipping spatula<br><br>Recipes similar to beef and broccoli: Pepper Steak Skillet, Beef & Arugula Stir-Fry, Sticky Beef & Zucchini, Beef Kaldereta, Garlic Butter Steak Bites, Roasted Broccoli & Carrots, Beef Skillet Lasagna, Beef Stew, Keto Beef & Cabbage<br><br>Garnishing tips for Beef and Broccoli: Best served on it's own or on top of hot rice with chopped scallions!<br><br>A small description of Beef and Broccoli: Stir fried broccoli and tender beef strips stir-fried in a rich savory sauce.<br><br>For Beef and Broccoli, these dietary tags go well with it: dinner, contains soy, meat recipes, asian american cuisine, lunch, american cuisine, beef recipes, asian 
cuisine, chinese cuisine, hearty recipes, rice recipes, protein rich recipes, non vegetarian, saucy recipes, stir fry recipes, healthy recipes</code> | <code>How do you describe Beef and Broccoli?</code> | | <code>Tag: Beef and Broccoli<br><br>A small description of Beef and Broccoli: Stir fried broccoli and tender beef strips stir-fried in a rich savory sauce.</code> | <code>How do you describe Beef and Broccoli?</code> | | <code>Tag: Beef and Broccoli<br><br>Garnishing tips for Beef and Broccoli: Best served on it's own or on top of hot rice with chopped scallions!<br><br>Preparations (ingredient) needed to cook Beef and Broccoli:<br>Marinate the beef slices with soy sauce and bakig soda for at least 20 minutes. Use rib-eye steak for best results. Alternatively you can also use flank steak or skirt steak.<br><br>Recipes similar to beef and broccoli: Pepper Steak Skillet, Beef & Arugula Stir-Fry, Sticky Beef & Zucchini, Beef Kaldereta, Garlic Butter Steak Bites, Roasted Broccoli & Carrots, Beef Skillet Lasagna, Beef Stew, Keto Beef & Cabbage<br><br>Cook time of Beef and Broccoli based on different serving sizes: Serving 1 - 20 mins, Serving 2 - 25 mins, Serving 3 - 30 mins, Serving 4 - 35 mins<br><br>Macro ingredients required to cook Beef and Broccoli:<br>Broccoli, Soy Sauce, Ribeye Steak, Soy Sauce, Garlic, Scallion, Ginger, Baking Soda</code> | <code>What are some classic garnishes for Beef and Broccoli?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 384, 256, 128, 64, 32 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `gradient_accumulation_steps`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 
100 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `tf32`: True - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 16 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 100 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: True - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 
'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_32_cosine_map@100 | dim_384_cosine_map@100 | dim_64_cosine_map@100 | |:------:|:----:|:-------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|:---------------------:| | 0.0682 | 10 | 11.9607 | - | - | - | - | - | | 0.1363 | 20 | 12.1342 | - | - | - | - | - | | 0.2045 | 30 | 11.8794 | - | - | - | - | - | | 0.2727 | 40 | 11.838 | - | - | - 
| - | - | | 0.3409 | 50 | 11.9675 | - | - | - | - | - | | 0.4090 | 60 | 11.5518 | - | - | - | - | - | | 0.4772 | 70 | 11.3832 | - | - | - | - | - | | 0.5454 | 80 | 11.2516 | - | - | - | - | - | | 0.6135 | 90 | 11.1272 | - | - | - | - | - | | 0.6817 | 100 | 10.9423 | - | - | - | - | - | | 0.7499 | 110 | 5.0611 | - | - | - | - | - | | 0.8181 | 120 | 0.2761 | - | - | - | - | - | | 0.8862 | 130 | 5.5841 | - | - | - | - | - | | 0.9544 | 140 | 5.453 | - | - | - | - | - | | 1.0021 | 147 | - | 0.7339 | 0.7369 | 0.7086 | 0.7389 | 0.7304 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.1.2+cu121 - Accelerate: 0.31.0 - Datasets: 2.19.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly 
define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
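The training logs above evaluate the same checkpoint at 384, 256, 128, 64, and 32 dimensions, which is the point of `MatryoshkaLoss`: a lower-dimensional embedding is obtained by simply keeping the first k components and re-normalizing. A self-contained numpy sketch of that truncation step (random vectors stand in for real model output; with a Sentence Transformers model you would pass `truncate_dim` to the constructor instead):

```python
import numpy as np

def truncate_embedding(emb: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components and re-normalize to unit length,
    mirroring what loading a matryoshka model with a smaller
    truncate_dim does to its embeddings."""
    truncated = emb[..., :dim]
    norms = np.linalg.norm(truncated, axis=-1, keepdims=True)
    return truncated / norms

rng = np.random.default_rng(0)
full = rng.normal(size=(3, 384))       # stand-in for 384-dim model output
small = truncate_embedding(full, 128)  # 128-dim matryoshka view
print(small.shape)                     # (3, 128)
print(np.allclose(np.linalg.norm(small, axis=1), 1.0))  # True
```

Cosine similarities computed on the truncated vectors approximate those of the full vectors, which is why the dim_128 and dim_64 retrieval metrics above stay close to the dim_384 ones.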
Grayx/john_paul_van_damme_66
Grayx
2024-07-01T13:38:18Z
0
0
null
[ "region:us" ]
null
2024-07-01T13:38:06Z
Entry not found
anushaporwal/wav2vec2-common_voice-tr-demo-mini
anushaporwal
2024-07-01T13:56:22Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_16_0", "generated_from_trainer", "tr", "dataset:common_voice_16_0", "base_model:facebook/wav2vec2-large-xlsr-53", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-07-01T13:38:15Z
--- language: - tr license: apache-2.0 base_model: facebook/wav2vec2-large-xlsr-53 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_16_0 - generated_from_trainer datasets: - common_voice_16_0 metrics: - wer model-index: - name: wav2vec2-common_voice-tr-demo-mini results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: MOZILLA-FOUNDATION/COMMON_VOICE_16_0 - TR type: common_voice_16_0 config: tr split: test[0:250] args: 'Config: tr, Training split: train[0:3000], Eval split: test[0:250]' metrics: - name: Wer type: wer value: 0.9382716049382716 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-common_voice-tr-demo-mini This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_16_0 - TR dataset. 
It achieves the following results on the evaluation set: - Loss: 0.9823 - Wer: 0.9383 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | No log | 0.5333 | 100 | 4.0238 | 1.0 | | No log | 1.0667 | 200 | 3.2451 | 1.0 | | No log | 1.6 | 300 | 2.9997 | 1.0 | | No log | 2.1333 | 400 | 1.4256 | 1.0054 | | 4.5926 | 2.6667 | 500 | 1.2465 | 0.9730 | ### Framework versions - Transformers 4.42.0.dev0 - Pytorch 2.3.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
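The Wer values reported above are word error rate: the word-level edit distance between the reference transcript and the model's hypothesis, divided by the number of reference words. A minimal pure-Python sketch of that computation (the Turkish strings are illustrative, not taken from the evaluation set):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance divided by
    the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("merhaba nasılsın", "merhaba iyisin"))  # 0.5 (one substitution over two words)
```

A WER of 0.9383, as on this checkpoint, means roughly nine word errors for every ten reference words, consistent with a model trained for only three epochs on a 3,000-utterance subset.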
Grayx/john_paul_van_damme_67
Grayx
2024-07-01T13:39:05Z
0
0
null
[ "region:us" ]
null
2024-07-01T13:38:53Z
Entry not found
ayush7/outputs
ayush7
2024-07-01T15:04:18Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:microsoft/Phi-3-mini-128k-instruct", "license:mit", "region:us" ]
null
2024-07-01T13:38:59Z
--- base_model: microsoft/Phi-3-mini-128k-instruct datasets: - generator library_name: peft license: mit tags: - trl - sft - generated_from_trainer model-index: - name: outputs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # outputs This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 2 - eval_batch_size: 8 - seed: 0 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 5 ### Training results ### Framework versions - PEFT 0.11.1 - Transformers 4.42.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
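PEFT adapters like the one in this repository keep the base model frozen and train only a small number of added parameters. If the adapter is a LoRA (the most common PEFT method; this card does not state which was used), each adapted linear layer's effective weight is W + (alpha/r)·B·A, and merging folds that update back into W. A numpy sketch of the merge arithmetic, with illustrative shapes that are not Phi-3's:

```python
import numpy as np

rng = np.random.default_rng(42)
d_out, d_in, r, alpha = 8, 16, 4, 8   # illustrative sizes only

W = rng.normal(size=(d_out, d_in))    # frozen base weight
A = rng.normal(size=(r, d_in))        # LoRA "down" projection
B = np.zeros((d_out, r))              # LoRA "up" projection, zero at init

def merged_weight(W, A, B, alpha, r):
    """Fold the low-rank update into the base weight, the arithmetic
    behind merging a LoRA adapter into its base model."""
    return W + (alpha / r) * (B @ A)

# With B initialized to zero the adapter is a no-op before training starts.
print(np.allclose(merged_weight(W, A, B, alpha, r), W))  # True
```

Because only A and B are trained, the checkpoint stored in this repo is the adapter alone; the `base_model` field above records which frozen weights it must be applied to.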
Grayx/john_paul_van_damme_68
Grayx
2024-07-01T13:39:24Z
0
0
null
[ "region:us" ]
null
2024-07-01T13:39:13Z
Entry not found
Adi-0-0-Gupta/Embedding-v2-256
Adi-0-0-Gupta
2024-07-01T13:40:01Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:75086", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "model-index", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
2024-07-01T13:39:58Z
--- datasets: [] language: [] library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:75086 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss widget: - source_sentence: 'Tag: Vegetable Pulao Preparation time (ingredient) for Vegetable Pulao based on different serving sizes: Serving 1 - 15 mins, Serving 2 - 18 mins, Serving 3 - 20 mins, Serving 4 - 22 mins' sentences: - What diet tags are relevant to Kimchi Fried Rice (Chicken)? - What dietary classifications are suitable for Chicken & Broccoli Alfredo? - What is the time required to prepare ingredients for Vegetable Pulao? - source_sentence: "Tag: Vegetable Pulao\n\nMacro ingredients required to cook Vegetable\ \ Pulao:\nOrange Carrot, French Bean, Cauliflower, Plain Unsweetened Yogurt, Red\ \ Onion, Clove, Bay Leaf, Green Cardamom, Ginger-Garlic Paste, Green Chili Pepper,\ \ Cinnamon, Basmati Rice, Fresh Cilantro, Fresh Mint\n\nPreparations (ingredient)\ \ needed to cook Vegetable Pulao:\nWash the rice twice, and then soak it for at\ \ least 20 minutes. Drain the water and transfer the rice into the macro container.\ \ \nMix the yogurt, whole spices, green chili, and ginger garlic paste with the\ \ chopped veggies in a separate bowl, and then transfer it to the macro container.\ \ Please make sure to use plain yogurt. 
If using Greek yogurt, use half the quantity\ \ of plain yogurt.\n\nTotal calories (nutritional energy) in Vegetable Pulao based\ \ on different serving sizes: Serving 1 - 300 kcal, Serving 2 - 500 kcal, Serving\ \ 3 - 700 kcal, Serving 4 - 900 kcal" sentences: - Can you give me some insights into Scrambled Eggs with Veggies? - How should the ingredients for Chicken Pad Thai be prepared? - What’s the calorie figure for Vegetable Pulao? - source_sentence: 'Tag: Chicken Pad Thai Spatula required to cook chicken pad thai based on different serving sizes: Serving 1 - noodle spatula, Serving 2 - noodle spatula, Serving 3 - noodle spatula, Serving 4 - noodle spatula' sentences: - What are the detailed cooking instructions for Rava Upma? - What’s the best way to prep ingredients for Teriyaki Tofu? - What kind of spatula do you need for Chicken Pad Thai? - source_sentence: 'Tag: Kimchi Fried Rice (Chicken) A small description of Kimchi Fried Rice (Chicken): Kimchi fried rice is made with kimchi, spicy gochujang, and garlic. The umami flavors from the kimchi juice balance beautifully with the spicy gochujang sauce and soy sauce, also creating that beautiful red-tinted color. ' sentences: - What spatula would you recommend for Vegetable Pulao? - How can I improve the presentation of Chicken Pad Thai with garnishes? - How would you describe the dish Kimchi Fried Rice (Chicken)? - source_sentence: 'Tag: Rava Upma Cook time of Rava Upma based on different serving sizes: Serving 1 - 26 mins, Serving 2 - 26 mins, Serving 3 - 28 mins, Serving 4 - 30 mins Preparation time (ingredient) for Rava Upma based on different serving sizes: Serving 1 - 6 mins, Serving 2 - 7 mins, Serving 3 - 8 mins, Serving 4 - 10 mins' sentences: - How long does it take to prepare ingredients for Rava Upma? - What are some final touch tips for Rava Upma? - How would you summarize Mac & Cheese? 
model-index: - name: SentenceTransformer results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 384 type: dim_384 metrics: - type: cosine_accuracy@1 value: 0.9431019051272216 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9684183608234241 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9704641350210971 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9943741209563994 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.9431019051272216 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.8694540340109961 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.8533691343817927 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.795371435877765 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.15481705915468094 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.24073986604144076 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.32643031270609696 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.5043459566396565 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.918622245854727 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9581816761141663 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7389009694677152 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.9448919575501854 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9704641350210971 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.97250990921877 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9942462600690449 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.9448919575501854 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.8702211993351234 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.8534458509142053 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.796407109065337 name: Cosine 
Precision@10 - type: cosine_recall@1 value: 0.15493928650775676 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.24095543792672336 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.32656065320980354 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.5047541251867395 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9196805870274929 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9598828347773509 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7369163574373101 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 128 type: dim_128 metrics: - type: cosine_accuracy@1 value: 0.9487277841708222 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9727656309934791 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9758342922899885 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9941183991816903 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.9487277841708222 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.8725653156032902 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.8555427694668201 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.7964454673315434 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.15515249707637835 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.24121211958393993 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.3269542903263234 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.5049379410293565 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9205498440081438 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9627833488592988 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7339144971303314 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 64 type: dim_64 metrics: - type: cosine_accuracy@1 value: 0.949622810382304 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9744278225290883 name: Cosine 
Accuracy@3 - type: cosine_accuracy@5 value: 0.9777522056003068 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9943741209563994 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.949622810382304 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.8755061160124451 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.8594553126198696 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.7982610919319781 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.15511449448310274 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.2414942027072444 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.32761337610101815 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.5053322703457185 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9226639618734229 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9639710344351698 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7303591002200212 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 32 type: dim_32 metrics: - type: cosine_accuracy@1 value: 0.9451476793248945 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9731492136555427 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9783915100370797 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9947577036184632 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.9451476793248945 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.8679623236585261 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.8521416698631887 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.7814473852448537 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.15477631695655358 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.2399798683039478 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.3251298048319623 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.49721034132531955 name: Cosine Recall@10 - 
type: cosine_ndcg@10 value: 0.9089797806768267 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9616667478481831 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.708556549543554 name: Cosine Map@100 --- # SentenceTransformer This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the πŸ€— Hub model = SentenceTransformer("Adi-0-0-Gupta/Embedding-v2-256") # Run inference sentences = [ 'Tag: Rava Upma\n\nCook time of Rava Upma based on different serving sizes: Serving 1 - 26 mins, Serving 2 - 26 mins, Serving 3 - 28 mins, Serving 4 - 30 mins\n\nPreparation time (ingredient) for Rava Upma based on different serving sizes: Serving 1 - 6 mins, Serving 2 - 7 mins, Serving 3 - 8 mins, Serving 4 - 10 mins', 'How long does it take to prepare ingredients for Rava Upma?', 'What are some final touch tips for Rava Upma?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_384` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.9431 | | cosine_accuracy@3 | 0.9684 | | cosine_accuracy@5 | 0.9705 | | cosine_accuracy@10 | 0.9944 | | cosine_precision@1 | 0.9431 | | cosine_precision@3 | 0.8695 | | cosine_precision@5 | 0.8534 | | cosine_precision@10 | 0.7954 | | cosine_recall@1 | 0.1548 | | cosine_recall@3 | 0.2407 | | cosine_recall@5 | 0.3264 | | cosine_recall@10 | 0.5043 | | cosine_ndcg@10 | 0.9186 | | cosine_mrr@10 | 0.9582 | | **cosine_map@100** | **0.7389** | #### Information Retrieval * Dataset: `dim_256` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.9449 | | cosine_accuracy@3 | 0.9705 | | cosine_accuracy@5 | 0.9725 | | cosine_accuracy@10 | 0.9942 | | cosine_precision@1 | 0.9449 | | cosine_precision@3 | 0.8702 | | cosine_precision@5 | 0.8534 | | cosine_precision@10 | 0.7964 | | cosine_recall@1 | 0.1549 | | cosine_recall@3 | 0.241 | | cosine_recall@5 | 0.3266 | | cosine_recall@10 | 0.5048 | | cosine_ndcg@10 | 0.9197 | | cosine_mrr@10 | 0.9599 | | **cosine_map@100** | **0.7369** | #### Information Retrieval * Dataset: `dim_128` * Evaluated with 
[<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.9487 | | cosine_accuracy@3 | 0.9728 | | cosine_accuracy@5 | 0.9758 | | cosine_accuracy@10 | 0.9941 | | cosine_precision@1 | 0.9487 | | cosine_precision@3 | 0.8726 | | cosine_precision@5 | 0.8555 | | cosine_precision@10 | 0.7964 | | cosine_recall@1 | 0.1552 | | cosine_recall@3 | 0.2412 | | cosine_recall@5 | 0.327 | | cosine_recall@10 | 0.5049 | | cosine_ndcg@10 | 0.9205 | | cosine_mrr@10 | 0.9628 | | **cosine_map@100** | **0.7339** | #### Information Retrieval * Dataset: `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.9496 | | cosine_accuracy@3 | 0.9744 | | cosine_accuracy@5 | 0.9778 | | cosine_accuracy@10 | 0.9944 | | cosine_precision@1 | 0.9496 | | cosine_precision@3 | 0.8755 | | cosine_precision@5 | 0.8595 | | cosine_precision@10 | 0.7983 | | cosine_recall@1 | 0.1551 | | cosine_recall@3 | 0.2415 | | cosine_recall@5 | 0.3276 | | cosine_recall@10 | 0.5053 | | cosine_ndcg@10 | 0.9227 | | cosine_mrr@10 | 0.964 | | **cosine_map@100** | **0.7304** | #### Information Retrieval * Dataset: `dim_32` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.9451 | | cosine_accuracy@3 | 0.9731 | | cosine_accuracy@5 | 0.9784 | | cosine_accuracy@10 | 0.9948 | | cosine_precision@1 | 0.9451 | | cosine_precision@3 | 0.868 | | cosine_precision@5 | 0.8521 | 
| cosine_precision@10 | 0.7814 | | cosine_recall@1 | 0.1548 | | cosine_recall@3 | 0.24 | | cosine_recall@5 | 0.3251 | | cosine_recall@10 | 0.4972 | | cosine_ndcg@10 | 0.909 | | cosine_mrr@10 | 0.9617 | | **cosine_map@100** | **0.7086** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 75,086 training samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 20 tokens</li><li>mean: 150.64 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 15.44 tokens</li><li>max: 22 tokens</li></ul> | * Samples: | positive | anchor | 
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------| | <code>Tag: Beef and Broccoli<br><br>Spatula required to cook beef and broccoli based on different serving sizes: Serving 1 - flipping spatula, Serving 2 - flipping spatula, Serving 3 - flipping spatula, Serving 4 - flipping spatula<br><br>Recipes similar to beef and broccoli: Pepper Steak Skillet, Beef & Arugula Stir-Fry, Sticky Beef & Zucchini, Beef Kaldereta, Garlic Butter Steak Bites, Roasted Broccoli & Carrots, Beef Skillet Lasagna, Beef Stew, Keto Beef & Cabbage<br><br>Garnishing tips for Beef and Broccoli: Best served on it's own or on top of hot rice with chopped scallions!<br><br>A small description of Beef and Broccoli: Stir fried broccoli and tender beef strips stir-fried in a rich savory sauce.<br><br>For Beef and Broccoli, these dietary tags go well with it: dinner, contains soy, meat recipes, asian american cuisine, lunch, american cuisine, beef recipes, asian 
cuisine, chinese cuisine, hearty recipes, rice recipes, protein rich recipes, non vegetarian, saucy recipes, stir fry recipes, healthy recipes</code> | <code>How do you describe Beef and Broccoli?</code> | | <code>Tag: Beef and Broccoli<br><br>A small description of Beef and Broccoli: Stir fried broccoli and tender beef strips stir-fried in a rich savory sauce.</code> | <code>How do you describe Beef and Broccoli?</code> | | <code>Tag: Beef and Broccoli<br><br>Garnishing tips for Beef and Broccoli: Best served on its own or on top of hot rice with chopped scallions!<br><br>Preparations (ingredient) needed to cook Beef and Broccoli:<br>Marinate the beef slices with soy sauce and baking soda for at least 20 minutes. Use rib-eye steak for best results. Alternatively you can also use flank steak or skirt steak.<br><br>Recipes similar to beef and broccoli: Pepper Steak Skillet, Beef & Arugula Stir-Fry, Sticky Beef & Zucchini, Beef Kaldereta, Garlic Butter Steak Bites, Roasted Broccoli & Carrots, Beef Skillet Lasagna, Beef Stew, Keto Beef & Cabbage<br><br>Cook time of Beef and Broccoli based on different serving sizes: Serving 1 - 20 mins, Serving 2 - 25 mins, Serving 3 - 30 mins, Serving 4 - 35 mins<br><br>Macro ingredients required to cook Beef and Broccoli:<br>Broccoli, Soy Sauce, Ribeye Steak, Soy Sauce, Garlic, Scallion, Ginger, Baking Soda</code> | <code>What are some classic garnishes for Beef and Broccoli?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 384, 256, 128, 64, 32 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `gradient_accumulation_steps`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 
100 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `tf32`: True - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 16 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 100 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: True - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 
'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_32_cosine_map@100 | dim_384_cosine_map@100 | dim_64_cosine_map@100 | |:------:|:----:|:-------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|:---------------------:| | 0.0682 | 10 | 11.9607 | - | - | - | - | - | | 0.1363 | 20 | 12.1342 | - | - | - | - | - | | 0.2045 | 30 | 11.8794 | - | - | - | - | - | | 0.2727 | 40 | 11.838 | - | - | - 
| - | - | | 0.3409 | 50 | 11.9675 | - | - | - | - | - | | 0.4090 | 60 | 11.5518 | - | - | - | - | - | | 0.4772 | 70 | 11.3832 | - | - | - | - | - | | 0.5454 | 80 | 11.2516 | - | - | - | - | - | | 0.6135 | 90 | 11.1272 | - | - | - | - | - | | 0.6817 | 100 | 10.9423 | - | - | - | - | - | | 0.7499 | 110 | 5.0611 | - | - | - | - | - | | 0.8181 | 120 | 0.2761 | - | - | - | - | - | | 0.8862 | 130 | 5.5841 | - | - | - | - | - | | 0.9544 | 140 | 5.453 | - | - | - | - | - | | 1.0021 | 147 | - | 0.7339 | 0.7369 | 0.7086 | 0.7389 | 0.7304 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.1.2+cu121 - Accelerate: 0.31.0 - Datasets: 2.19.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly 
define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
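The MatryoshkaLoss configuration above trains the model so that truncated prefixes of the full 384-dimensional embedding (256, 128, 64, or 32 dims) remain usable on their own. A minimal sketch of the client-side truncation step, using a random stand-in vector rather than a real embedding (loading the actual fine-tuned model would require `sentence_transformers` and the repository id, which this card does not state):

```python
import math
import random

def truncate_embedding(emb, dim):
    """Keep the first `dim` Matryoshka dimensions and re-normalize for cosine similarity."""
    truncated = emb[:dim]
    norm = math.sqrt(sum(x * x for x in truncated))
    return [x / norm for x in truncated]

random.seed(0)
full = [random.gauss(0.0, 1.0) for _ in range(384)]  # stand-in for a 384-dim sentence embedding

for dim in (384, 256, 128, 64, 32):  # the matryoshka_dims listed in the loss config above
    small = truncate_embedding(full, dim)
    print(dim, len(small))
```

In practice the same idea is applied to embeddings produced by the model's encode step; only the slicing and re-normalization shown here are Matryoshka-specific.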
itay-nakash/model_0e1a108b92_sweep_exalted-totem-1157
itay-nakash
2024-07-01T13:40:16Z
0
0
null
[ "region:us" ]
null
2024-07-01T13:40:16Z
Entry not found
OnFinanceAI/llama-3-8b-analyst-qa-instr
OnFinanceAI
2024-07-01T14:04:28Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-07-01T13:40:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
summertime0/nashk1
summertime0
2024-07-01T13:40:45Z
0
0
null
[ "region:us" ]
null
2024-07-01T13:40:44Z
Entry not found
Meziane/question_answering_T5_policy_qa_4
Meziane
2024-07-01T13:45:35Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "question-answering", "endpoints_compatible", "text-generation-inference", "region:us" ]
question-answering
2024-07-01T13:41:48Z
Entry not found
pursuitofds/finetuned_qa_llama3_8b_qlora_model_withfull_qv
pursuitofds
2024-07-01T13:43:21Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:pursuitofds/finetuned_qa_llama3_8b_qlora_model_withfull_qv", "region:us" ]
null
2024-07-01T13:43:13Z
--- library_name: peft base_model: pursuitofds/finetuned_qa_llama3_8b_qlora_model_withfull_qv --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
vbafnaa/whisper-small-hi
vbafnaa
2024-07-01T13:43:36Z
0
0
null
[ "region:us" ]
null
2024-07-01T13:43:36Z
Entry not found
taylor001/Mistral_01
taylor001
2024-07-01T13:44:26Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-07-01T13:44:11Z
--- base_model: unsloth/mistral-7b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl --- # Uploaded model - **Developed by:** taylor001 - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
rinogrego/GritLM-BioMistral-7B-8-bit
rinogrego
2024-07-01T13:48:45Z
0
0
null
[ "region:us" ]
null
2024-07-01T13:48:45Z
Entry not found
henrik-dra/paligemma-ft-svhn
henrik-dra
2024-07-01T15:54:16Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-07-01T13:49:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ArisA1/TitodobleP
ArisA1
2024-07-01T13:51:50Z
0
0
null
[ "license:openrail", "region:us" ]
null
2024-07-01T13:50:04Z
--- license: openrail ---
vatsaldin/distilbert-base-uncased-finetuned-ner
vatsaldin
2024-07-01T13:50:27Z
0
0
null
[ "region:us" ]
null
2024-07-01T13:50:27Z
Entry not found
rezaakb/reward_modeling_anthropic_hh
rezaakb
2024-07-01T13:50:46Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "region:us" ]
null
2024-07-01T13:50:43Z
--- license: llama3 base_model: meta-llama/Meta-Llama-3-8B tags: - generated_from_trainer model-index: - name: reward_modeling_anthropic_hh results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # reward_modeling_anthropic_hh This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.41e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.35.0 - Pytorch 2.0.1 - Datasets 2.14.7 - Tokenizers 0.14.1
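The `total_train_batch_size: 8` reported in the hyperparameters above follows from the per-device batch size and gradient accumulation. A one-line sanity check (assuming a single device, which the card does not state explicitly):

```python
# Values taken from the training hyperparameters listed in the card above.
per_device_train_batch_size = 4
gradient_accumulation_steps = 2
num_devices = 1  # assumption: the card lists no multi-GPU settings

total_train_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # matches the reported total_train_batch_size of 8
```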
PhucDanh/Bartpho-fine-tuning-on-UIT-Course-information
PhucDanh
2024-07-01T14:02:57Z
0
0
transformers
[ "transformers", "safetensors", "mbart", "question-answering", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2024-07-01T13:51:08Z
--- license: mit ---
fontesaurelio/fontes
fontesaurelio
2024-07-01T13:52:26Z
0
0
null
[ "region:us" ]
null
2024-07-01T13:52:25Z
Entry not found
DipeshChaudhary/ShareGPTChatBot-Counselchat1
DipeshChaudhary
2024-07-02T12:17:03Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-07-01T13:53:15Z
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---

# To Use This Model

# STEP 1: Install the dependencies

Install Unsloth, Xformers (Flash Attention), and all other required packages according to your environment and GPU. To install Unsloth on your own computer, follow the installation instructions on the Unsloth GitHub page: [LINK IS HERE](https://github.com/unslothai/unsloth#installation-instructions---conda)

# STEP 2: Follow the code below

**LOAD THE MODEL**

```
import torch
from unsloth import FastLanguageModel

max_seq_length = 2048  # Choose any! RoPE scaling is supported automatically.
dtype = None  # None for auto detection. Float16 for Tesla T4/V100, Bfloat16 for Ampere+.
load_in_4bit = True  # Use 4-bit quantization to reduce memory usage. Can be False.

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="DipeshChaudhary/ShareGPTChatBot-Counselchat1",  # Your fine-tuned model
    max_seq_length=max_seq_length,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
)
```

We now use the Llama-3 format for conversation-style finetunes, with Open Assistant conversations in ShareGPT style. **We use Unsloth's `get_chat_template` function to get the correct chat template; it supports zephyr, chatml, mistral, llama, alpaca, vicuna, vicuna_old, and Unsloth's own optimized template.**

```
from unsloth.chat_templates import get_chat_template

tokenizer = get_chat_template(
    tokenizer,
    chat_template="llama-3",  # Supports zephyr, chatml, mistral, llama, alpaca, vicuna, vicuna_old, unsloth
    mapping={"role": "from", "content": "value", "user": "human", "assistant": "gpt"},  # ShareGPT style
)
```

## FOR ACTUAL INFERENCE

```
from transformers import TextStreamer

FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

messages = [
    {"from": "human", "value": "I'm worried about my exam."},
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,  # Must add for generation
    return_tensors="pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer)
outputs = model.generate(input_ids=inputs, streamer=text_streamer, max_new_tokens=128, use_cache=True)
```

# Uploaded model

- **Developed by:** DipeshChaudhary
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
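The `mapping` argument in the chat-template code above is just a renaming scheme between ShareGPT-style messages (`from`/`value`, with roles `human`/`gpt`) and standard chat roles (`role`/`content`, with `user`/`assistant`). A standalone sketch of what that mapping expresses, in plain Python with no Unsloth dependency (the helper name `sharegpt_to_chat` is illustrative, not part of any library):

```python
# The same mapping dict passed to get_chat_template above:
MAPPING = {"role": "from", "content": "value", "user": "human", "assistant": "gpt"}

def sharegpt_to_chat(messages):
    """Convert ShareGPT-style messages to standard chat-format messages."""
    role_key, content_key = MAPPING["role"], MAPPING["content"]
    # Invert the role-name part of the mapping: "human" -> "user", "gpt" -> "assistant".
    role_names = {v: k for k, v in MAPPING.items() if k in ("user", "assistant")}
    return [
        {"role": role_names[m[role_key]], "content": m[content_key]}
        for m in messages
    ]

converted = sharegpt_to_chat([
    {"from": "human", "value": "I'm worried about my exam."},
    {"from": "gpt", "value": "It is normal to feel anxious before an exam."},
])
print(converted)
```

Unsloth applies this renaming internally when tokenizing; the sketch only shows the transformation the `mapping` dict describes.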
habulaj/1867918466
habulaj
2024-07-01T13:54:12Z
0
0
null
[ "region:us" ]
null
2024-07-01T13:54:08Z
Entry not found
gokulsrinivasagan/gpt_train_2_768
gokulsrinivasagan
2024-07-02T16:03:53Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "dataset:gokuls/wiki_book_corpus_raw_dataset_tiny", "base_model:openai-community/gpt2", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-07-01T13:54:20Z
--- license: mit base_model: openai-community/gpt2 tags: - generated_from_trainer datasets: - gokuls/wiki_book_corpus_raw_dataset_tiny metrics: - accuracy model-index: - name: gpt_train_2_768 results: - task: name: Causal Language Modeling type: text-generation dataset: name: gokuls/wiki_book_corpus_raw_dataset_tiny type: gokuls/wiki_book_corpus_raw_dataset_tiny metrics: - name: Accuracy type: accuracy value: 0.10393614847954215 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt_train_2_768 This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on the gokuls/wiki_book_corpus_raw_dataset_tiny dataset. It achieves the following results on the evaluation set: - Loss: 7.4883 - Accuracy: 0.1039 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 10.9688 | 0.0001 | 1 | 10.9688 | 0.0000 | | 10.9609 | 0.0002 | 2 | 10.9688 | 0.0000 | | 10.9609 | 0.0003 | 3 | 10.9688 | 0.0000 | | 10.9609 | 0.0004 | 4 | 10.9688 | 0.0000 | | 10.9609 | 0.0005 | 5 | 10.9688 | 0.0000 | | 10.9688 | 0.0006 | 6 | 10.9688 | 0.0000 | | 10.9609 | 0.0007 | 7 | 10.9688 | 0.0000 | | 10.9609 | 0.0008 | 8 | 10.9688 | 0.0000 | | 10.9688 | 0.0009 | 9 | 10.9688 | 0.0000 | | 10.9531 | 0.0010 | 10 | 10.9688 | 0.0000 
| | 10.9688 | 0.0011 | 11 | 10.9688 | 0.0000 | | 10.9688 | 0.0012 | 12 | 10.9688 | 0.0000 | | 10.9531 | 0.0013 | 13 | 10.9688 | 0.0000 | | 10.9609 | 0.0014 | 14 | 10.9688 | 0.0000 | | 10.9688 | 0.0015 | 15 | 10.9688 | 0.0000 | | 10.9766 | 0.0015 | 16 | 10.9688 | 0.0000 | | 10.9688 | 0.0016 | 17 | 10.9688 | 0.0000 | | 10.9609 | 0.0017 | 18 | 10.8828 | 0.0007 | | 10.8906 | 0.0018 | 19 | 10.8047 | 0.0051 | | 10.8359 | 0.0019 | 20 | 10.7188 | 0.0112 | | 10.75 | 0.0020 | 21 | 10.6484 | 0.0175 | | 10.6719 | 0.0021 | 22 | 10.5781 | 0.0280 | | 10.6172 | 0.0022 | 23 | 10.5 | 0.0392 | | 10.5391 | 0.0023 | 24 | 10.4375 | 0.0447 | | 10.5078 | 0.0024 | 25 | 10.3828 | 0.0478 | | 10.4609 | 0.0025 | 26 | 10.3125 | 0.0499 | | 10.3906 | 0.0026 | 27 | 10.2656 | 0.0511 | | 10.3281 | 0.0027 | 28 | 10.2109 | 0.0521 | | 10.2656 | 0.0028 | 29 | 10.1641 | 0.0531 | | 10.25 | 0.0029 | 30 | 10.1172 | 0.0537 | | 10.2031 | 0.0030 | 31 | 10.0703 | 0.0544 | | 10.1641 | 0.0031 | 32 | 10.0312 | 0.0552 | | 10.125 | 0.0032 | 33 | 9.9922 | 0.0558 | | 10.0859 | 0.0033 | 34 | 9.9609 | 0.0562 | | 10.0391 | 0.0034 | 35 | 9.9219 | 0.0566 | | 10.0156 | 0.0035 | 36 | 9.8906 | 0.0568 | | 9.9609 | 0.0036 | 37 | 9.8594 | 0.0567 | | 9.9141 | 0.0037 | 38 | 9.8359 | 0.0566 | | 9.875 | 0.0038 | 39 | 9.8047 | 0.0568 | | 9.8672 | 0.0039 | 40 | 9.7812 | 0.0569 | | 9.8438 | 0.0040 | 41 | 9.7578 | 0.0568 | | 9.7969 | 0.0041 | 42 | 9.7344 | 0.0565 | | 9.8203 | 0.0042 | 43 | 9.7109 | 0.0564 | | 9.7891 | 0.0043 | 44 | 9.6875 | 0.0564 | | 9.7031 | 0.0044 | 45 | 9.6719 | 0.0566 | | 9.7344 | 0.0045 | 46 | 9.6484 | 0.0569 | | 9.7266 | 0.0046 | 47 | 9.6328 | 0.0573 | | 9.7031 | 0.0046 | 48 | 9.6172 | 0.0579 | | 9.7109 | 0.0047 | 49 | 9.6016 | 0.0585 | | 9.6406 | 0.0048 | 50 | 9.5781 | 0.0591 | | 9.6797 | 0.0049 | 51 | 9.5625 | 0.0597 | | 9.6328 | 0.0050 | 52 | 9.5469 | 0.0605 | | 9.6172 | 0.0051 | 53 | 9.5312 | 0.0612 | | 9.6172 | 0.0052 | 54 | 9.5234 | 0.0615 | | 9.5703 | 0.0053 | 55 | 9.5078 | 0.0617 | | 9.5781 | 0.0054 | 56 
| 9.4922 | 0.0618 | | 9.5938 | 0.0055 | 57 | 9.4766 | 0.0620 | | 9.5391 | 0.0056 | 58 | 9.4688 | 0.0621 | | 9.4922 | 0.0057 | 59 | 9.4531 | 0.0620 | | 9.4688 | 0.0058 | 60 | 9.4375 | 0.0620 | | 9.4922 | 0.0059 | 61 | 9.4297 | 0.0620 | | 9.4609 | 0.0060 | 62 | 9.4141 | 0.0620 | | 9.4297 | 0.0061 | 63 | 9.4062 | 0.0620 | | 9.4844 | 0.0062 | 64 | 9.3906 | 0.0620 | | 9.4531 | 0.0063 | 65 | 9.3828 | 0.0622 | | 9.4375 | 0.0064 | 66 | 9.3672 | 0.0625 | | 9.4375 | 0.0065 | 67 | 9.3594 | 0.0628 | | 9.3984 | 0.0066 | 68 | 9.3438 | 0.0630 | | 9.4062 | 0.0067 | 69 | 9.3359 | 0.0632 | | 9.3984 | 0.0068 | 70 | 9.3203 | 0.0633 | | 9.4375 | 0.0069 | 71 | 9.3125 | 0.0633 | | 9.3828 | 0.0070 | 72 | 9.3047 | 0.0634 | | 9.3594 | 0.0071 | 73 | 9.2891 | 0.0634 | | 9.3438 | 0.0072 | 74 | 9.2812 | 0.0634 | | 9.3672 | 0.0073 | 75 | 9.2734 | 0.0634 | | 9.3125 | 0.0074 | 76 | 9.2578 | 0.0634 | | 9.3047 | 0.0075 | 77 | 9.25 | 0.0633 | | 9.2969 | 0.0076 | 78 | 9.2422 | 0.0632 | | 9.2891 | 0.0077 | 79 | 9.2266 | 0.0631 | | 9.2812 | 0.0077 | 80 | 9.2188 | 0.0631 | | 9.2656 | 0.0078 | 81 | 9.2109 | 0.0632 | | 9.2422 | 0.0079 | 82 | 9.2031 | 0.0633 | | 9.2656 | 0.0080 | 83 | 9.1875 | 0.0635 | | 9.25 | 0.0081 | 84 | 9.1797 | 0.0637 | | 9.2344 | 0.0082 | 85 | 9.1719 | 0.0639 | | 9.2266 | 0.0083 | 86 | 9.1562 | 0.0640 | | 9.25 | 0.0084 | 87 | 9.1484 | 0.0641 | | 9.1406 | 0.0085 | 88 | 9.1406 | 0.0641 | | 9.1562 | 0.0086 | 89 | 9.1328 | 0.0642 | | 9.2031 | 0.0087 | 90 | 9.1172 | 0.0641 | | 9.1406 | 0.0088 | 91 | 9.1094 | 0.0642 | | 9.1406 | 0.0089 | 92 | 9.1016 | 0.0643 | | 9.1406 | 0.0090 | 93 | 9.0938 | 0.0644 | | 9.1328 | 0.0091 | 94 | 9.0781 | 0.0644 | | 9.125 | 0.0092 | 95 | 9.0703 | 0.0645 | | 9.1016 | 0.0093 | 96 | 9.0625 | 0.0646 | | 9.125 | 0.0094 | 97 | 9.0547 | 0.0648 | | 9.0625 | 0.0095 | 98 | 9.0391 | 0.0652 | | 9.0859 | 0.0096 | 99 | 9.0312 | 0.0655 | | 9.0547 | 0.0097 | 100 | 9.0234 | 0.0657 | | 9.0547 | 0.0098 | 101 | 9.0156 | 0.0658 | | 9.0625 | 0.0099 | 102 | 9.0078 | 0.0659 | | 
9.0547 | 0.0100 | 103 | 8.9922 | 0.0661 | | 9.0156 | 0.0101 | 104 | 8.9844 | 0.0662 | | 9.0391 | 0.0102 | 105 | 8.9766 | 0.0664 | | 9.0234 | 0.0103 | 106 | 8.9688 | 0.0664 | | 9.0234 | 0.0104 | 107 | 8.9609 | 0.0664 | | 8.9766 | 0.0105 | 108 | 8.9453 | 0.0664 | | 8.9922 | 0.0106 | 109 | 8.9375 | 0.0665 | | 8.9453 | 0.0107 | 110 | 8.9297 | 0.0665 | | 8.9609 | 0.0108 | 111 | 8.9219 | 0.0664 | | 8.9766 | 0.0108 | 112 | 8.9141 | 0.0664 | | 8.9844 | 0.0109 | 113 | 8.8984 | 0.0666 | | 8.9453 | 0.0110 | 114 | 8.8906 | 0.0669 | | 8.9688 | 0.0111 | 115 | 8.8828 | 0.0673 | | 8.9766 | 0.0112 | 116 | 8.875 | 0.0677 | | 8.9297 | 0.0113 | 117 | 8.8672 | 0.0682 | | 8.9297 | 0.0114 | 118 | 8.8594 | 0.0689 | | 8.8672 | 0.0115 | 119 | 8.8516 | 0.0694 | | 8.8906 | 0.0116 | 120 | 8.8359 | 0.0700 | | 8.8984 | 0.0117 | 121 | 8.8281 | 0.0703 | | 8.8984 | 0.0118 | 122 | 8.8203 | 0.0704 | | 8.8828 | 0.0119 | 123 | 8.8125 | 0.0706 | | 8.8594 | 0.0120 | 124 | 8.8047 | 0.0707 | | 8.8281 | 0.0121 | 125 | 8.7969 | 0.0708 | | 8.8359 | 0.0122 | 126 | 8.7812 | 0.0710 | | 8.8359 | 0.0123 | 127 | 8.7734 | 0.0711 | | 8.8281 | 0.0124 | 128 | 8.7656 | 0.0710 | | 8.8438 | 0.0125 | 129 | 8.7578 | 0.0707 | | 8.7578 | 0.0126 | 130 | 8.75 | 0.0702 | | 8.7812 | 0.0127 | 131 | 8.7422 | 0.0698 | | 8.7734 | 0.0128 | 132 | 8.7344 | 0.0697 | | 8.7812 | 0.0129 | 133 | 8.7266 | 0.0701 | | 8.7891 | 0.0130 | 134 | 8.7188 | 0.0707 | | 8.7656 | 0.0131 | 135 | 8.7031 | 0.0713 | | 8.7891 | 0.0132 | 136 | 8.6953 | 0.0719 | | 8.7188 | 0.0133 | 137 | 8.6875 | 0.0726 | | 8.7266 | 0.0134 | 138 | 8.6797 | 0.0733 | | 8.75 | 0.0135 | 139 | 8.6719 | 0.0737 | | 8.7188 | 0.0136 | 140 | 8.6641 | 0.0740 | | 8.7344 | 0.0137 | 141 | 8.6562 | 0.0742 | | 8.6641 | 0.0138 | 142 | 8.6484 | 0.0742 | | 8.7031 | 0.0139 | 143 | 8.6406 | 0.0741 | | 8.6797 | 0.0139 | 144 | 8.6328 | 0.0741 | | 8.6797 | 0.0140 | 145 | 8.6172 | 0.0739 | | 8.6719 | 0.0141 | 146 | 8.6094 | 0.0736 | | 8.6641 | 0.0142 | 147 | 8.6016 | 0.0736 | | 8.6484 | 0.0143 | 148 | 
8.5938 | 0.0737 | | 8.6172 | 0.0144 | 149 | 8.5859 | 0.0741 | | 8.6719 | 0.0145 | 150 | 8.5781 | 0.0746 | | 8.6406 | 0.0146 | 151 | 8.5703 | 0.0750 | | 8.6172 | 0.0147 | 152 | 8.5625 | 0.0754 | | 8.6094 | 0.0148 | 153 | 8.5547 | 0.0756 | | 8.6016 | 0.0149 | 154 | 8.5469 | 0.0756 | | 8.5625 | 0.0150 | 155 | 8.5391 | 0.0755 | | 8.5312 | 0.0151 | 156 | 8.5312 | 0.0756 | | 8.5703 | 0.0152 | 157 | 8.5234 | 0.0756 | | 8.6172 | 0.0153 | 158 | 8.5156 | 0.0757 | | 8.5781 | 0.0154 | 159 | 8.5078 | 0.0757 | | 8.6016 | 0.0155 | 160 | 8.5 | 0.0759 | | 8.5547 | 0.0156 | 161 | 8.4922 | 0.0762 | | 8.5547 | 0.0157 | 162 | 8.4844 | 0.0766 | | 8.5312 | 0.0158 | 163 | 8.4766 | 0.0767 | | 8.5 | 0.0159 | 164 | 8.4688 | 0.0767 | | 8.5312 | 0.0160 | 165 | 8.4609 | 0.0766 | | 8.5312 | 0.0161 | 166 | 8.4531 | 0.0766 | | 8.4531 | 0.0162 | 167 | 8.4453 | 0.0767 | | 8.4766 | 0.0163 | 168 | 8.4375 | 0.0768 | | 8.4766 | 0.0164 | 169 | 8.4297 | 0.0770 | | 8.4688 | 0.0165 | 170 | 8.4219 | 0.0772 | | 8.4922 | 0.0166 | 171 | 8.4141 | 0.0775 | | 8.4375 | 0.0167 | 172 | 8.4141 | 0.0777 | | 8.4609 | 0.0168 | 173 | 8.4062 | 0.0777 | | 8.4141 | 0.0169 | 174 | 8.3984 | 0.0777 | | 8.4531 | 0.0170 | 175 | 8.3906 | 0.0778 | | 8.3984 | 0.0170 | 176 | 8.3828 | 0.0778 | | 8.4141 | 0.0171 | 177 | 8.375 | 0.0779 | | 8.4453 | 0.0172 | 178 | 8.3672 | 0.0781 | | 8.4219 | 0.0173 | 179 | 8.3594 | 0.0783 | | 8.4219 | 0.0174 | 180 | 8.3516 | 0.0785 | | 8.4062 | 0.0175 | 181 | 8.3438 | 0.0785 | | 8.3984 | 0.0176 | 182 | 8.3359 | 0.0787 | | 8.3828 | 0.0177 | 183 | 8.3281 | 0.0790 | | 8.375 | 0.0178 | 184 | 8.3203 | 0.0792 | | 8.3594 | 0.0179 | 185 | 8.3125 | 0.0795 | | 8.375 | 0.0180 | 186 | 8.3125 | 0.0797 | | 8.3125 | 0.0181 | 187 | 8.3047 | 0.0796 | | 8.3438 | 0.0182 | 188 | 8.2969 | 0.0796 | | 8.3281 | 0.0183 | 189 | 8.2891 | 0.0795 | | 8.3359 | 0.0184 | 190 | 8.2812 | 0.0795 | | 8.3047 | 0.0185 | 191 | 8.2734 | 0.0798 | | 8.3359 | 0.0186 | 192 | 8.2656 | 0.0800 | | 8.3047 | 0.0187 | 193 | 8.2578 | 0.0803 | | 8.2969 | 
0.0188 | 194 | 8.2578 | 0.0805 | | 8.3203 | 0.0189 | 195 | 8.25 | 0.0807 | | 8.2734 | 0.0190 | 196 | 8.2422 | 0.0809 | | 8.25 | 0.0191 | 197 | 8.2344 | 0.0809 | | 8.2734 | 0.0192 | 198 | 8.2266 | 0.0810 | | 8.2109 | 0.0193 | 199 | 8.2188 | 0.0809 | | 8.25 | 0.0194 | 200 | 8.2109 | 0.0809 | | 8.2734 | 0.0195 | 201 | 8.2031 | 0.0810 | | 8.2188 | 0.0196 | 202 | 8.2031 | 0.0812 | | 8.2578 | 0.0197 | 203 | 8.1953 | 0.0816 | | 8.2344 | 0.0198 | 204 | 8.1875 | 0.0819 | | 8.2969 | 0.0199 | 205 | 8.1797 | 0.0823 | | 8.2812 | 0.0200 | 206 | 8.1719 | 0.0825 | | 8.2578 | 0.0201 | 207 | 8.1641 | 0.0824 | | 8.2031 | 0.0201 | 208 | 8.1641 | 0.0824 | | 8.1953 | 0.0202 | 209 | 8.1562 | 0.0822 | | 8.2344 | 0.0203 | 210 | 8.1484 | 0.0821 | | 8.1484 | 0.0204 | 211 | 8.1406 | 0.0822 | | 8.2188 | 0.0205 | 212 | 8.1328 | 0.0824 | | 8.1406 | 0.0206 | 213 | 8.1328 | 0.0826 | | 8.1641 | 0.0207 | 214 | 8.125 | 0.0829 | | 8.1328 | 0.0208 | 215 | 8.1172 | 0.0831 | | 8.1875 | 0.0209 | 216 | 8.1094 | 0.0833 | | 8.1719 | 0.0210 | 217 | 8.1016 | 0.0835 | | 8.125 | 0.0211 | 218 | 8.1016 | 0.0835 | | 8.1172 | 0.0212 | 219 | 8.0938 | 0.0835 | | 8.1172 | 0.0213 | 220 | 8.0859 | 0.0834 | | 8.1562 | 0.0214 | 221 | 8.0781 | 0.0835 | | 8.0781 | 0.0215 | 222 | 8.0781 | 0.0838 | | 8.1094 | 0.0216 | 223 | 8.0703 | 0.0840 | | 8.0938 | 0.0217 | 224 | 8.0625 | 0.0843 | | 8.0938 | 0.0218 | 225 | 8.0547 | 0.0846 | | 8.1016 | 0.0219 | 226 | 8.0469 | 0.0847 | | 8.1094 | 0.0220 | 227 | 8.0469 | 0.0846 | | 8.1016 | 0.0221 | 228 | 8.0391 | 0.0844 | | 8.0859 | 0.0222 | 229 | 8.0312 | 0.0844 | | 8.0859 | 0.0223 | 230 | 8.0312 | 0.0845 | | 8.1094 | 0.0224 | 231 | 8.0234 | 0.0849 | | 8.1016 | 0.0225 | 232 | 8.0156 | 0.0853 | | 8.0859 | 0.0226 | 233 | 8.0078 | 0.0856 | | 8.0859 | 0.0227 | 234 | 8.0078 | 0.0857 | | 8.0781 | 0.0228 | 235 | 8.0 | 0.0857 | | 8.0234 | 0.0229 | 236 | 7.9922 | 0.0856 | | 8.0391 | 0.0230 | 237 | 7.9883 | 0.0855 | | 8.0078 | 0.0231 | 238 | 7.9844 | 0.0855 | | 8.0078 | 0.0232 | 239 | 7.9766 | 0.0857 
| | 7.9883 | 0.0232 | 240 | 7.9727 | 0.0862 | | 7.9805 | 0.0233 | 241 | 7.9648 | 0.0865 | | 8.0234 | 0.0234 | 242 | 7.9609 | 0.0868 | | 7.9961 | 0.0235 | 243 | 7.9570 | 0.0870 | | 8.0156 | 0.0236 | 244 | 7.9492 | 0.0870 | | 7.9766 | 0.0237 | 245 | 7.9453 | 0.0869 | | 7.9297 | 0.0238 | 246 | 7.9414 | 0.0866 | | 7.9336 | 0.0239 | 247 | 7.9375 | 0.0865 | | 7.9219 | 0.0240 | 248 | 7.9297 | 0.0866 | | 7.957 | 0.0241 | 249 | 7.9258 | 0.0869 | | 7.9453 | 0.0242 | 250 | 7.9180 | 0.0874 | | 7.9805 | 0.0243 | 251 | 7.9141 | 0.0879 | | 7.9531 | 0.0244 | 252 | 7.9102 | 0.0883 | | 7.9102 | 0.0245 | 253 | 7.9062 | 0.0885 | | 7.9844 | 0.0246 | 254 | 7.8984 | 0.0886 | | 7.9414 | 0.0247 | 255 | 7.8945 | 0.0885 | | 7.9453 | 0.0248 | 256 | 7.8906 | 0.0883 | | 7.9219 | 0.0249 | 257 | 7.8867 | 0.0883 | | 7.9141 | 0.0250 | 258 | 7.8828 | 0.0885 | | 7.9258 | 0.0251 | 259 | 7.875 | 0.0889 | | 7.957 | 0.0252 | 260 | 7.8711 | 0.0893 | | 7.8984 | 0.0253 | 261 | 7.8672 | 0.0896 | | 7.8945 | 0.0254 | 262 | 7.8633 | 0.0898 | | 7.9141 | 0.0255 | 263 | 7.8594 | 0.0899 | | 7.9453 | 0.0256 | 264 | 7.8555 | 0.0899 | | 7.8672 | 0.0257 | 265 | 7.8477 | 0.0900 | | 7.9375 | 0.0258 | 266 | 7.8438 | 0.0902 | | 7.9219 | 0.0259 | 267 | 7.8398 | 0.0905 | | 7.8555 | 0.0260 | 268 | 7.8359 | 0.0907 | | 7.8984 | 0.0261 | 269 | 7.8320 | 0.0908 | | 7.8906 | 0.0262 | 270 | 7.8281 | 0.0909 | | 7.8711 | 0.0263 | 271 | 7.8242 | 0.0910 | | 7.8633 | 0.0263 | 272 | 7.8203 | 0.0909 | | 7.8633 | 0.0264 | 273 | 7.8164 | 0.0909 | | 7.8789 | 0.0265 | 274 | 7.8125 | 0.0909 | | 7.8438 | 0.0266 | 275 | 7.8086 | 0.0910 | | 7.8789 | 0.0267 | 276 | 7.8047 | 0.0911 | | 7.8516 | 0.0268 | 277 | 7.8008 | 0.0912 | | 7.8711 | 0.0269 | 278 | 7.7969 | 0.0913 | | 7.8008 | 0.0270 | 279 | 7.7930 | 0.0916 | | 7.8477 | 0.0271 | 280 | 7.7891 | 0.0918 | | 7.8086 | 0.0272 | 281 | 7.7852 | 0.0919 | | 7.8398 | 0.0273 | 282 | 7.7812 | 0.0920 | | 7.8008 | 0.0274 | 283 | 7.7773 | 0.0922 | | 7.8281 | 0.0275 | 284 | 7.7734 | 0.0922 | | 7.7852 | 0.0276 | 
285 | 7.7695 | 0.0926 | | 7.793 | 0.0277 | 286 | 7.7656 | 0.0929 | | 7.8086 | 0.0278 | 287 | 7.7617 | 0.0931 | | 7.7812 | 0.0279 | 288 | 7.7578 | 0.0931 | | 7.793 | 0.0280 | 289 | 7.7539 | 0.0931 | | 7.7539 | 0.0281 | 290 | 7.75 | 0.0931 | | 7.75 | 0.0282 | 291 | 7.7461 | 0.0930 | | 7.8164 | 0.0283 | 292 | 7.7422 | 0.0930 | | 7.7539 | 0.0284 | 293 | 7.7422 | 0.0931 | | 7.8086 | 0.0285 | 294 | 7.7383 | 0.0932 | | 7.793 | 0.0286 | 295 | 7.7344 | 0.0936 | | 7.7695 | 0.0287 | 296 | 7.7305 | 0.0937 | | 7.75 | 0.0288 | 297 | 7.7266 | 0.0938 | | 7.7891 | 0.0289 | 298 | 7.7227 | 0.0938 | | 7.7773 | 0.0290 | 299 | 7.7188 | 0.0936 | | 7.7227 | 0.0291 | 300 | 7.7148 | 0.0935 | | 7.7109 | 0.0292 | 301 | 7.7148 | 0.0937 | | 7.7148 | 0.0293 | 302 | 7.7109 | 0.0939 | | 7.7812 | 0.0294 | 303 | 7.7070 | 0.0940 | | 7.7109 | 0.0294 | 304 | 7.7031 | 0.0941 | | 7.7539 | 0.0295 | 305 | 7.6992 | 0.0942 | | 7.7734 | 0.0296 | 306 | 7.6992 | 0.0943 | | 7.6914 | 0.0297 | 307 | 7.6953 | 0.0943 | | 7.6445 | 0.0298 | 308 | 7.6914 | 0.0944 | | 7.6953 | 0.0299 | 309 | 7.6875 | 0.0945 | | 7.75 | 0.0300 | 310 | 7.6836 | 0.0946 | | 7.7539 | 0.0301 | 311 | 7.6836 | 0.0949 | | 7.6953 | 0.0302 | 312 | 7.6797 | 0.0951 | | 7.7188 | 0.0303 | 313 | 7.6758 | 0.0951 | | 7.6914 | 0.0304 | 314 | 7.6719 | 0.0953 | | 7.7344 | 0.0305 | 315 | 7.6719 | 0.0954 | | 7.7383 | 0.0306 | 316 | 7.6680 | 0.0953 | | 7.6875 | 0.0307 | 317 | 7.6641 | 0.0950 | | 7.6914 | 0.0308 | 318 | 7.6602 | 0.0947 | | 7.6758 | 0.0309 | 319 | 7.6602 | 0.0945 | | 7.6836 | 0.0310 | 320 | 7.6562 | 0.0947 | | 7.6914 | 0.0311 | 321 | 7.6523 | 0.0950 | | 7.6719 | 0.0312 | 322 | 7.6523 | 0.0954 | | 7.6914 | 0.0313 | 323 | 7.6484 | 0.0958 | | 7.6094 | 0.0314 | 324 | 7.6445 | 0.0961 | | 7.7148 | 0.0315 | 325 | 7.6406 | 0.0962 | | 7.6641 | 0.0316 | 326 | 7.6406 | 0.0961 | | 7.6602 | 0.0317 | 327 | 7.6367 | 0.0961 | | 7.7031 | 0.0318 | 328 | 7.6328 | 0.0963 | | 7.6953 | 0.0319 | 329 | 7.6328 | 0.0966 | | 7.6445 | 0.0320 | 330 | 7.6289 | 0.0968 | | 
7.6445 | 0.0321 | 331 | 7.625 | 0.0969 | | 7.6445 | 0.0322 | 332 | 7.625 | 0.0969 | | 7.668 | 0.0323 | 333 | 7.6211 | 0.0968 | | 7.6523 | 0.0324 | 334 | 7.6172 | 0.0967 | | 7.6602 | 0.0325 | 335 | 7.6172 | 0.0968 | | 7.6328 | 0.0325 | 336 | 7.6133 | 0.0972 | | 7.6523 | 0.0326 | 337 | 7.6094 | 0.0976 | | 7.6133 | 0.0327 | 338 | 7.6094 | 0.0981 | | 7.6367 | 0.0328 | 339 | 7.6055 | 0.0984 | | 7.6641 | 0.0329 | 340 | 7.6016 | 0.0985 | | 7.6367 | 0.0330 | 341 | 7.6016 | 0.0985 | | 7.6133 | 0.0331 | 342 | 7.5977 | 0.0985 | | 7.6016 | 0.0332 | 343 | 7.5977 | 0.0984 | | 7.668 | 0.0333 | 344 | 7.5938 | 0.0984 | | 7.6172 | 0.0334 | 345 | 7.5898 | 0.0984 | | 7.6016 | 0.0335 | 346 | 7.5898 | 0.0985 | | 7.6328 | 0.0336 | 347 | 7.5859 | 0.0985 | | 7.668 | 0.0337 | 348 | 7.5820 | 0.0986 | | 7.6719 | 0.0338 | 349 | 7.5820 | 0.0987 | | 7.6602 | 0.0339 | 350 | 7.5781 | 0.0989 | | 7.6641 | 0.0340 | 351 | 7.5742 | 0.0992 | | 7.6445 | 0.0341 | 352 | 7.5742 | 0.0994 | | 7.5781 | 0.0342 | 353 | 7.5703 | 0.0995 | | 7.6523 | 0.0343 | 354 | 7.5703 | 0.0996 | | 7.6562 | 0.0344 | 355 | 7.5664 | 0.0996 | | 7.5977 | 0.0345 | 356 | 7.5664 | 0.0998 | | 7.5977 | 0.0346 | 357 | 7.5625 | 0.0998 | | 7.5508 | 0.0347 | 358 | 7.5625 | 0.0997 | | 7.6172 | 0.0348 | 359 | 7.5586 | 0.0997 | | 7.5469 | 0.0349 | 360 | 7.5547 | 0.0997 | | 7.6172 | 0.0350 | 361 | 7.5547 | 0.0997 | | 7.625 | 0.0351 | 362 | 7.5508 | 0.0998 | | 7.6289 | 0.0352 | 363 | 7.5508 | 0.0999 | | 7.5234 | 0.0353 | 364 | 7.5469 | 0.1002 | | 7.5703 | 0.0354 | 365 | 7.5430 | 0.1006 | | 7.5859 | 0.0355 | 366 | 7.5430 | 0.1010 | | 7.5469 | 0.0356 | 367 | 7.5391 | 0.1014 | | 7.5508 | 0.0356 | 368 | 7.5391 | 0.1016 | | 7.6172 | 0.0357 | 369 | 7.5352 | 0.1017 | | 7.6172 | 0.0358 | 370 | 7.5352 | 0.1017 | | 7.5352 | 0.0359 | 371 | 7.5312 | 0.1018 | | 7.5859 | 0.0360 | 372 | 7.5312 | 0.1018 | | 7.5586 | 0.0361 | 373 | 7.5273 | 0.1017 | | 7.6406 | 0.0362 | 374 | 7.5273 | 0.1017 | | 7.5273 | 0.0363 | 375 | 7.5234 | 0.1018 | | 7.5312 | 0.0364 | 376 | 
7.5195 | 0.1020 |
| 7.5898 | 0.0365 | 377 | 7.5195 | 0.1023 |
| 7.5898 | 0.0366 | 378 | 7.5156 | 0.1027 |
| 7.543 | 0.0367 | 379 | 7.5156 | 0.1029 |
| 7.5156 | 0.0368 | 380 | 7.5117 | 0.1030 |
| 7.5664 | 0.0369 | 381 | 7.5117 | 0.1031 |
| 7.5625 | 0.0370 | 382 | 7.5078 | 0.1031 |
| 7.5312 | 0.0371 | 383 | 7.5078 | 0.1032 |
| 7.625 | 0.0372 | 384 | 7.5078 | 0.1032 |
| 7.5898 | 0.0373 | 385 | 7.5039 | 0.1034 |
| 7.5625 | 0.0374 | 386 | 7.5 | 0.1035 |
| 7.5664 | 0.0375 | 387 | 7.5 | 0.1037 |
| 7.4609 | 0.0376 | 388 | 7.4961 | 0.1039 |
| 7.5469 | 0.0377 | 389 | 7.4961 | 0.1040 |
| 7.5742 | 0.0378 | 390 | 7.4922 | 0.1040 |
| 7.4375 | 0.0379 | 391 | 7.4922 | 0.1040 |
| 7.4961 | 0.0380 | 392 | 7.4883 | 0.1039 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.1.0a0+32f93b1
- Datasets 2.20.0
- Tokenizers 0.19.1
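The validation losses in these training logs are mean per-token cross-entropy values (in nats, assuming the standard `transformers` Trainer behavior), so they can be converted to perplexity with a one-liner. A minimal sketch, using the final validation loss of 7.4883 from the run logged above:

```python
import math

def perplexity(cross_entropy: float) -> float:
    """Convert a mean per-token cross-entropy loss (in nats) to perplexity."""
    return math.exp(cross_entropy)

# Final validation loss from the table above; exp(7.4883) is on the order of 1.8e3.
print(round(perplexity(7.4883)))
```

By the same conversion, the early-training losses near 10.9 correspond to a perplexity close to the GPT-2 vocabulary size (ln(50257) ≈ 10.82), i.e. roughly a uniform predictor before training takes hold.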
gokulsrinivasagan/gpt_train_12_384
gokulsrinivasagan
2024-07-02T16:04:42Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "dataset:gokuls/wiki_book_corpus_raw_dataset_tiny", "base_model:openai-community/gpt2", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-07-01T13:54:24Z
---
license: mit
base_model: openai-community/gpt2
tags:
- generated_from_trainer
datasets:
- gokuls/wiki_book_corpus_raw_dataset_tiny
metrics:
- accuracy
model-index:
- name: gpt_train_12_384
  results:
  - task:
      name: Causal Language Modeling
      type: text-generation
    dataset:
      name: gokuls/wiki_book_corpus_raw_dataset_tiny
      type: gokuls/wiki_book_corpus_raw_dataset_tiny
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.10244681503029747
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# gpt_train_12_384

This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on the gokuls/wiki_book_corpus_raw_dataset_tiny dataset.
It achieves the following results on the evaluation set:
- Loss: 8.8125
- Accuracy: 0.1024

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 10.8984 | 0.0000 | 1 | 10.9062 | 0.0001 |
| 10.8984 | 0.0001 | 2 | 10.9062 | 0.0001 |
| 10.8984 | 0.0001 | 3 | 10.9062 | 0.0001 |
| 10.8984 | 0.0002 | 4 | 10.9062 | 0.0001 |
| 10.9062 | 0.0002 | 5 | 10.9062 | 0.0001 |
| 10.8984 | 0.0003 | 6 | 10.9062 | 0.0001 |
| 10.9062 | 0.0003 | 7 | 10.9062 | 0.0001 |
| 10.9062 | 0.0004 | 8 | 10.9062 | 0.0001 |
| 10.9062 | 0.0004 | 9 | 10.9062 | 0.0001 |
| 10.8984 | 0.0005 | 10 | 10.9062 |
0.0001 | | 10.8984 | 0.0005 | 11 | 10.9062 | 0.0001 | | 10.8984 | 0.0006 | 12 | 10.9062 | 0.0001 | | 10.8984 | 0.0006 | 13 | 10.9062 | 0.0001 | | 10.9062 | 0.0007 | 14 | 10.9062 | 0.0001 | | 10.8984 | 0.0007 | 15 | 10.9062 | 0.0001 | | 10.8984 | 0.0008 | 16 | 10.9062 | 0.0001 | | 10.9062 | 0.0008 | 17 | 10.9062 | 0.0001 | | 10.9062 | 0.0009 | 18 | 10.7578 | 0.0110 | | 10.7734 | 0.0009 | 19 | 10.6562 | 0.0285 | | 10.6797 | 0.0010 | 20 | 10.5781 | 0.0469 | | 10.6016 | 0.0010 | 21 | 10.5234 | 0.0485 | | 10.5234 | 0.0011 | 22 | 10.4766 | 0.0478 | | 10.5 | 0.0011 | 23 | 10.4375 | 0.0483 | | 10.4531 | 0.0012 | 24 | 10.4062 | 0.0507 | | 10.4141 | 0.0012 | 25 | 10.3828 | 0.0531 | | 10.3672 | 0.0013 | 26 | 10.3594 | 0.0556 | | 10.3828 | 0.0013 | 27 | 10.3359 | 0.0562 | | 10.3594 | 0.0014 | 28 | 10.3203 | 0.0562 | | 10.3281 | 0.0014 | 29 | 10.3047 | 0.0559 | | 10.3203 | 0.0015 | 30 | 10.2969 | 0.0563 | | 10.3281 | 0.0015 | 31 | 10.2812 | 0.0566 | | 10.3359 | 0.0015 | 32 | 10.2734 | 0.0566 | | 10.2656 | 0.0016 | 33 | 10.2656 | 0.0570 | | 10.2656 | 0.0016 | 34 | 10.2578 | 0.0561 | | 10.2656 | 0.0017 | 35 | 10.2422 | 0.0562 | | 10.2656 | 0.0017 | 36 | 10.2344 | 0.0575 | | 10.2656 | 0.0018 | 37 | 10.2266 | 0.0586 | | 10.2109 | 0.0018 | 38 | 10.2188 | 0.0593 | | 10.2656 | 0.0019 | 39 | 10.2109 | 0.0596 | | 10.2266 | 0.0019 | 40 | 10.2031 | 0.0599 | | 10.2109 | 0.0020 | 41 | 10.1953 | 0.0601 | | 10.2109 | 0.0020 | 42 | 10.1797 | 0.0604 | | 10.2109 | 0.0021 | 43 | 10.1719 | 0.0608 | | 10.1484 | 0.0021 | 44 | 10.1641 | 0.0610 | | 10.1875 | 0.0022 | 45 | 10.1484 | 0.0611 | | 10.1719 | 0.0022 | 46 | 10.1406 | 0.0612 | | 10.1484 | 0.0023 | 47 | 10.1328 | 0.0615 | | 10.1172 | 0.0023 | 48 | 10.1172 | 0.0622 | | 10.1797 | 0.0024 | 49 | 10.1094 | 0.0632 | | 10.1016 | 0.0024 | 50 | 10.1016 | 0.0642 | | 10.1406 | 0.0025 | 51 | 10.0938 | 0.0651 | | 10.1406 | 0.0025 | 52 | 10.0859 | 0.0658 | | 10.1094 | 0.0026 | 53 | 10.0781 | 0.0663 | | 10.1016 | 0.0026 | 54 | 10.0703 | 0.0669 | | 10.0781 | 
0.0027 | 55 | 10.0625 | 0.0672 | | 10.0703 | 0.0027 | 56 | 10.0547 | 0.0678 | | 10.0703 | 0.0028 | 57 | 10.0469 | 0.0681 | | 10.0469 | 0.0028 | 58 | 10.0391 | 0.0686 | | 10.1016 | 0.0029 | 59 | 10.0312 | 0.0689 | | 10.0547 | 0.0029 | 60 | 10.0312 | 0.0694 | | 10.0391 | 0.0030 | 61 | 10.0234 | 0.0695 | | 10.0547 | 0.0030 | 62 | 10.0156 | 0.0692 | | 10.0312 | 0.0031 | 63 | 10.0078 | 0.0688 | | 10.0547 | 0.0031 | 64 | 10.0 | 0.0687 | | 10.0547 | 0.0031 | 65 | 9.9922 | 0.0693 | | 9.9922 | 0.0032 | 66 | 9.9844 | 0.0697 | | 10.0234 | 0.0032 | 67 | 9.9766 | 0.0705 | | 10.0 | 0.0033 | 68 | 9.9688 | 0.0711 | | 10.0 | 0.0033 | 69 | 9.9609 | 0.0715 | | 9.9688 | 0.0034 | 70 | 9.9609 | 0.0716 | | 9.9922 | 0.0034 | 71 | 9.9531 | 0.0717 | | 9.9844 | 0.0035 | 72 | 9.9453 | 0.0716 | | 9.9688 | 0.0035 | 73 | 9.9375 | 0.0718 | | 9.9453 | 0.0036 | 74 | 9.9297 | 0.0726 | | 9.9375 | 0.0036 | 75 | 9.9219 | 0.0734 | | 9.9141 | 0.0037 | 76 | 9.9141 | 0.0744 | | 9.9062 | 0.0037 | 77 | 9.9062 | 0.0751 | | 9.9219 | 0.0038 | 78 | 9.9062 | 0.0755 | | 9.9219 | 0.0038 | 79 | 9.8984 | 0.0756 | | 9.9219 | 0.0039 | 80 | 9.8906 | 0.0757 | | 9.875 | 0.0039 | 81 | 9.8828 | 0.0759 | | 9.9219 | 0.0040 | 82 | 9.875 | 0.0760 | | 9.875 | 0.0040 | 83 | 9.875 | 0.0763 | | 9.8672 | 0.0041 | 84 | 9.8672 | 0.0765 | | 9.9062 | 0.0041 | 85 | 9.8594 | 0.0769 | | 9.8828 | 0.0042 | 86 | 9.8516 | 0.0773 | | 9.8594 | 0.0042 | 87 | 9.8516 | 0.0775 | | 9.8906 | 0.0043 | 88 | 9.8438 | 0.0777 | | 9.8047 | 0.0043 | 89 | 9.8359 | 0.0777 | | 9.8203 | 0.0044 | 90 | 9.8359 | 0.0778 | | 9.8594 | 0.0044 | 91 | 9.8281 | 0.0781 | | 9.8438 | 0.0045 | 92 | 9.8203 | 0.0786 | | 9.8438 | 0.0045 | 93 | 9.8203 | 0.0790 | | 9.8438 | 0.0046 | 94 | 9.8125 | 0.0793 | | 9.8359 | 0.0046 | 95 | 9.8047 | 0.0794 | | 9.8281 | 0.0046 | 96 | 9.8047 | 0.0795 | | 9.8516 | 0.0047 | 97 | 9.7969 | 0.0796 | | 9.8281 | 0.0047 | 98 | 9.7891 | 0.0797 | | 9.7734 | 0.0048 | 99 | 9.7891 | 0.0798 | | 9.8125 | 0.0048 | 100 | 9.7812 | 0.0802 | | 9.8203 | 0.0049 | 
101 | 9.7734 | 0.0806 | | 9.8281 | 0.0049 | 102 | 9.7734 | 0.0809 | | 9.7734 | 0.0050 | 103 | 9.7656 | 0.0811 | | 9.7891 | 0.0050 | 104 | 9.7578 | 0.0813 | | 9.8047 | 0.0051 | 105 | 9.7578 | 0.0814 | | 9.7578 | 0.0051 | 106 | 9.75 | 0.0815 | | 9.7734 | 0.0052 | 107 | 9.75 | 0.0816 | | 9.7891 | 0.0052 | 108 | 9.7422 | 0.0818 | | 9.75 | 0.0053 | 109 | 9.7344 | 0.0819 | | 9.75 | 0.0053 | 110 | 9.7344 | 0.0821 | | 9.7266 | 0.0054 | 111 | 9.7266 | 0.0823 | | 9.7656 | 0.0054 | 112 | 9.7188 | 0.0824 | | 9.7812 | 0.0055 | 113 | 9.7188 | 0.0824 | | 9.7734 | 0.0055 | 114 | 9.7109 | 0.0824 | | 9.7266 | 0.0056 | 115 | 9.7109 | 0.0824 | | 9.7266 | 0.0056 | 116 | 9.7031 | 0.0826 | | 9.7109 | 0.0057 | 117 | 9.6953 | 0.0828 | | 9.6719 | 0.0057 | 118 | 9.6953 | 0.0829 | | 9.6953 | 0.0058 | 119 | 9.6875 | 0.0830 | | 9.6719 | 0.0058 | 120 | 9.6875 | 0.0831 | | 9.6953 | 0.0059 | 121 | 9.6797 | 0.0831 | | 9.6875 | 0.0059 | 122 | 9.6797 | 0.0831 | | 9.6719 | 0.0060 | 123 | 9.6719 | 0.0832 | | 9.6719 | 0.0060 | 124 | 9.6641 | 0.0833 | | 9.625 | 0.0061 | 125 | 9.6641 | 0.0833 | | 9.6719 | 0.0061 | 126 | 9.6562 | 0.0834 | | 9.6953 | 0.0062 | 127 | 9.6562 | 0.0836 | | 9.6719 | 0.0062 | 128 | 9.6484 | 0.0837 | | 9.6797 | 0.0062 | 129 | 9.6406 | 0.0838 | | 9.6484 | 0.0063 | 130 | 9.6406 | 0.0839 | | 9.6719 | 0.0063 | 131 | 9.6328 | 0.0839 | | 9.6328 | 0.0064 | 132 | 9.6328 | 0.0839 | | 9.6719 | 0.0064 | 133 | 9.625 | 0.0839 | | 9.6484 | 0.0065 | 134 | 9.6172 | 0.0840 | | 9.6406 | 0.0065 | 135 | 9.6172 | 0.0841 | | 9.6094 | 0.0066 | 136 | 9.6094 | 0.0843 | | 9.625 | 0.0066 | 137 | 9.6094 | 0.0845 | | 9.6562 | 0.0067 | 138 | 9.6016 | 0.0846 | | 9.6172 | 0.0067 | 139 | 9.6016 | 0.0847 | | 9.6094 | 0.0068 | 140 | 9.5938 | 0.0847 | | 9.6562 | 0.0068 | 141 | 9.5859 | 0.0847 | | 9.6562 | 0.0069 | 142 | 9.5859 | 0.0847 | | 9.6562 | 0.0069 | 143 | 9.5781 | 0.0848 | | 9.6016 | 0.0070 | 144 | 9.5781 | 0.0849 | | 9.6094 | 0.0070 | 145 | 9.5703 | 0.0850 | | 9.5938 | 0.0071 | 146 | 9.5703 | 0.0851 | | 
9.5703 | 0.0071 | 147 | 9.5625 | 0.0851 | | 9.5859 | 0.0072 | 148 | 9.5625 | 0.0851 | | 9.625 | 0.0072 | 149 | 9.5547 | 0.0852 | | 9.5859 | 0.0073 | 150 | 9.5469 | 0.0854 | | 9.5625 | 0.0073 | 151 | 9.5469 | 0.0855 | | 9.5547 | 0.0074 | 152 | 9.5391 | 0.0856 | | 9.5703 | 0.0074 | 153 | 9.5391 | 0.0858 | | 9.5391 | 0.0075 | 154 | 9.5312 | 0.0858 | | 9.5391 | 0.0075 | 155 | 9.5312 | 0.0859 | | 9.5 | 0.0076 | 156 | 9.5234 | 0.0861 | | 9.5547 | 0.0076 | 157 | 9.5156 | 0.0863 | | 9.5391 | 0.0077 | 158 | 9.5156 | 0.0863 | | 9.5312 | 0.0077 | 159 | 9.5156 | 0.0864 | | 9.5391 | 0.0077 | 160 | 9.5078 | 0.0864 | | 9.4688 | 0.0078 | 161 | 9.5 | 0.0866 | | 9.5547 | 0.0078 | 162 | 9.5 | 0.0867 | | 9.5078 | 0.0079 | 163 | 9.4922 | 0.0869 | | 9.5078 | 0.0079 | 164 | 9.4922 | 0.0870 | | 9.5 | 0.0080 | 165 | 9.4844 | 0.0872 | | 9.5312 | 0.0080 | 166 | 9.4844 | 0.0875 | | 9.5156 | 0.0081 | 167 | 9.4766 | 0.0877 | | 9.4844 | 0.0081 | 168 | 9.4766 | 0.0878 | | 9.4688 | 0.0082 | 169 | 9.4688 | 0.0878 | | 9.5156 | 0.0082 | 170 | 9.4609 | 0.0879 | | 9.4922 | 0.0083 | 171 | 9.4609 | 0.0879 | | 9.4844 | 0.0083 | 172 | 9.4531 | 0.0878 | | 9.5234 | 0.0084 | 173 | 9.4531 | 0.0879 | | 9.4844 | 0.0084 | 174 | 9.4453 | 0.0879 | | 9.4219 | 0.0085 | 175 | 9.4453 | 0.0880 | | 9.4062 | 0.0085 | 176 | 9.4375 | 0.0881 | | 9.4375 | 0.0086 | 177 | 9.4375 | 0.0883 | | 9.4375 | 0.0086 | 178 | 9.4297 | 0.0885 | | 9.4688 | 0.0087 | 179 | 9.4297 | 0.0887 | | 9.4453 | 0.0087 | 180 | 9.4219 | 0.0888 | | 9.4219 | 0.0088 | 181 | 9.4219 | 0.0890 | | 9.4141 | 0.0088 | 182 | 9.4141 | 0.0890 | | 9.4375 | 0.0089 | 183 | 9.4062 | 0.0890 | | 9.3984 | 0.0089 | 184 | 9.4062 | 0.0890 | | 9.4297 | 0.0090 | 185 | 9.3984 | 0.0891 | | 9.3984 | 0.0090 | 186 | 9.3984 | 0.0891 | | 9.3906 | 0.0091 | 187 | 9.3906 | 0.0892 | | 9.4219 | 0.0091 | 188 | 9.3906 | 0.0893 | | 9.4062 | 0.0092 | 189 | 9.3828 | 0.0895 | | 9.375 | 0.0092 | 190 | 9.3828 | 0.0897 | | 9.3828 | 0.0093 | 191 | 9.375 | 0.0898 | | 9.3906 | 0.0093 | 192 | 9.375 | 
0.0898 | | 9.3906 | 0.0093 | 193 | 9.3672 | 0.0899 | | 9.4141 | 0.0094 | 194 | 9.3672 | 0.0898 | | 9.3203 | 0.0094 | 195 | 9.3594 | 0.0898 | | 9.3906 | 0.0095 | 196 | 9.3594 | 0.0898 | | 9.3594 | 0.0095 | 197 | 9.3516 | 0.0900 | | 9.3516 | 0.0096 | 198 | 9.3516 | 0.0901 | | 9.3438 | 0.0096 | 199 | 9.3438 | 0.0902 | | 9.3516 | 0.0097 | 200 | 9.3438 | 0.0904 | | 9.3125 | 0.0097 | 201 | 9.3359 | 0.0906 | | 9.3516 | 0.0098 | 202 | 9.3359 | 0.0907 | | 9.3359 | 0.0098 | 203 | 9.3281 | 0.0908 | | 9.3516 | 0.0099 | 204 | 9.3281 | 0.0907 | | 9.3281 | 0.0099 | 205 | 9.3203 | 0.0906 | | 9.375 | 0.0100 | 206 | 9.3125 | 0.0905 | | 9.2812 | 0.0100 | 207 | 9.3125 | 0.0904 | | 9.3281 | 0.0101 | 208 | 9.3047 | 0.0906 | | 9.3281 | 0.0101 | 209 | 9.3047 | 0.0908 | | 9.3594 | 0.0102 | 210 | 9.2969 | 0.0912 | | 9.3438 | 0.0102 | 211 | 9.2969 | 0.0915 | | 9.2891 | 0.0103 | 212 | 9.2891 | 0.0916 | | 9.3438 | 0.0103 | 213 | 9.2891 | 0.0916 | | 9.3047 | 0.0104 | 214 | 9.2812 | 0.0915 | | 9.2656 | 0.0104 | 215 | 9.2812 | 0.0914 | | 9.2734 | 0.0105 | 216 | 9.2734 | 0.0913 | | 9.2891 | 0.0105 | 217 | 9.2734 | 0.0913 | | 9.2969 | 0.0106 | 218 | 9.2656 | 0.0913 | | 9.25 | 0.0106 | 219 | 9.2656 | 0.0914 | | 9.2578 | 0.0107 | 220 | 9.2578 | 0.0915 | | 9.25 | 0.0107 | 221 | 9.2578 | 0.0916 | | 9.2656 | 0.0108 | 222 | 9.25 | 0.0920 | | 9.2578 | 0.0108 | 223 | 9.25 | 0.0923 | | 9.2734 | 0.0108 | 224 | 9.2422 | 0.0926 | | 9.2891 | 0.0109 | 225 | 9.2422 | 0.0929 | | 9.25 | 0.0109 | 226 | 9.2344 | 0.0928 | | 9.2344 | 0.0110 | 227 | 9.2344 | 0.0928 | | 9.2656 | 0.0110 | 228 | 9.2266 | 0.0927 | | 9.2656 | 0.0111 | 229 | 9.2266 | 0.0928 | | 9.2656 | 0.0111 | 230 | 9.2188 | 0.0930 | | 9.25 | 0.0112 | 231 | 9.2188 | 0.0933 | | 9.2891 | 0.0112 | 232 | 9.2109 | 0.0937 | | 9.2188 | 0.0113 | 233 | 9.2031 | 0.0938 | | 9.2578 | 0.0113 | 234 | 9.2031 | 0.0939 | | 9.2422 | 0.0114 | 235 | 9.1953 | 0.0938 | | 9.2109 | 0.0114 | 236 | 9.1953 | 0.0935 | | 9.1797 | 0.0115 | 237 | 9.1953 | 0.0935 | | 9.1953 | 0.0115 | 238 
| 9.1875 | 0.0938 | | 9.1797 | 0.0116 | 239 | 9.1875 | 0.0943 | | 9.2266 | 0.0116 | 240 | 9.1797 | 0.0948 | | 9.2109 | 0.0117 | 241 | 9.1719 | 0.0951 | | 9.1719 | 0.0117 | 242 | 9.1719 | 0.0954 | | 9.2031 | 0.0118 | 243 | 9.1719 | 0.0955 | | 9.1953 | 0.0118 | 244 | 9.1641 | 0.0954 | | 9.1875 | 0.0119 | 245 | 9.1641 | 0.0950 | | 9.2031 | 0.0119 | 246 | 9.1562 | 0.0949 | | 9.1797 | 0.0120 | 247 | 9.1484 | 0.0950 | | 9.1484 | 0.0120 | 248 | 9.1484 | 0.0952 | | 9.1406 | 0.0121 | 249 | 9.1484 | 0.0954 | | 9.1641 | 0.0121 | 250 | 9.1406 | 0.0956 | | 9.1406 | 0.0122 | 251 | 9.1406 | 0.0956 | | 9.1719 | 0.0122 | 252 | 9.1328 | 0.0954 | | 9.125 | 0.0123 | 253 | 9.1328 | 0.0953 | | 9.1719 | 0.0123 | 254 | 9.125 | 0.0950 | | 9.1797 | 0.0124 | 255 | 9.125 | 0.0950 | | 9.0859 | 0.0124 | 256 | 9.1172 | 0.0951 | | 9.1875 | 0.0124 | 257 | 9.1172 | 0.0957 | | 9.1094 | 0.0125 | 258 | 9.1094 | 0.0963 | | 9.0938 | 0.0125 | 259 | 9.1094 | 0.0968 | | 9.1016 | 0.0126 | 260 | 9.1016 | 0.0969 | | 9.1406 | 0.0126 | 261 | 9.1016 | 0.0969 | | 9.0781 | 0.0127 | 262 | 9.0938 | 0.0966 | | 9.1094 | 0.0127 | 263 | 9.0938 | 0.0963 | | 9.1172 | 0.0128 | 264 | 9.0859 | 0.0959 | | 9.1172 | 0.0128 | 265 | 9.0859 | 0.0956 | | 9.125 | 0.0129 | 266 | 9.0859 | 0.0955 | | 9.1094 | 0.0129 | 267 | 9.0781 | 0.0957 | | 9.0781 | 0.0130 | 268 | 9.0781 | 0.0964 | | 9.125 | 0.0130 | 269 | 9.0703 | 0.0973 | | 9.0547 | 0.0131 | 270 | 9.0703 | 0.0980 | | 9.0781 | 0.0131 | 271 | 9.0625 | 0.0983 | | 9.1016 | 0.0132 | 272 | 9.0625 | 0.0981 | | 9.0703 | 0.0132 | 273 | 9.0547 | 0.0975 | | 9.0547 | 0.0133 | 274 | 9.0547 | 0.0969 | | 9.0312 | 0.0133 | 275 | 9.0469 | 0.0964 | | 9.0938 | 0.0134 | 276 | 9.0469 | 0.0964 | | 9.0156 | 0.0134 | 277 | 9.0391 | 0.0967 | | 9.1094 | 0.0135 | 278 | 9.0391 | 0.0973 | | 9.0859 | 0.0135 | 279 | 9.0312 | 0.0980 | | 9.0234 | 0.0136 | 280 | 9.0312 | 0.0984 | | 9.0781 | 0.0136 | 281 | 9.0234 | 0.0984 | | 9.0547 | 0.0137 | 282 | 9.0234 | 0.0983 | | 9.0234 | 0.0137 | 283 | 9.0156 | 0.0979 | | 
9.0312 | 0.0138 | 284 | 9.0156 | 0.0978 | | 9.0391 | 0.0138 | 285 | 9.0078 | 0.0978 | | 9.0312 | 0.0139 | 286 | 9.0078 | 0.0980 | | 9.0625 | 0.0139 | 287 | 9.0078 | 0.0982 | | 9.0234 | 0.0139 | 288 | 9.0 | 0.0986 | | 9.0078 | 0.0140 | 289 | 9.0 | 0.0990 | | 9.0 | 0.0140 | 290 | 8.9922 | 0.0996 | | 9.0078 | 0.0141 | 291 | 8.9922 | 0.0997 | | 9.0 | 0.0141 | 292 | 8.9844 | 0.0999 | | 9.0078 | 0.0142 | 293 | 8.9844 | 0.0999 | | 8.9922 | 0.0142 | 294 | 8.9766 | 0.0995 | | 9.0078 | 0.0143 | 295 | 8.9766 | 0.0990 | | 8.9844 | 0.0143 | 296 | 8.9688 | 0.0985 | | 8.9766 | 0.0144 | 297 | 8.9688 | 0.0983 | | 8.9531 | 0.0144 | 298 | 8.9609 | 0.0985 | | 8.9688 | 0.0145 | 299 | 8.9609 | 0.0988 | | 9.0312 | 0.0145 | 300 | 8.9531 | 0.0994 | | 9.0156 | 0.0146 | 301 | 8.9531 | 0.0998 | | 8.9688 | 0.0146 | 302 | 8.9453 | 0.0999 | | 9.0 | 0.0147 | 303 | 8.9453 | 0.0997 | | 8.9375 | 0.0147 | 304 | 8.9375 | 0.0996 | | 8.9766 | 0.0148 | 305 | 8.9375 | 0.0994 | | 8.9375 | 0.0148 | 306 | 8.9375 | 0.0994 | | 8.9688 | 0.0149 | 307 | 8.9297 | 0.0997 | | 8.9531 | 0.0149 | 308 | 8.9297 | 0.0999 | | 8.9531 | 0.0150 | 309 | 8.9219 | 0.1002 | | 8.9062 | 0.0150 | 310 | 8.9219 | 0.1003 | | 8.9375 | 0.0151 | 311 | 8.9141 | 0.1004 | | 8.8828 | 0.0151 | 312 | 8.9141 | 0.1003 | | 8.9219 | 0.0152 | 313 | 8.9062 | 0.1003 | | 8.9219 | 0.0152 | 314 | 8.9062 | 0.1004 | | 8.9297 | 0.0153 | 315 | 8.9062 | 0.1009 | | 8.9922 | 0.0153 | 316 | 8.8984 | 0.1011 | | 8.9062 | 0.0154 | 317 | 8.8984 | 0.1011 | | 8.9297 | 0.0154 | 318 | 8.8906 | 0.1011 | | 8.9531 | 0.0155 | 319 | 8.8906 | 0.1008 | | 8.9531 | 0.0155 | 320 | 8.8828 | 0.1006 | | 8.9375 | 0.0155 | 321 | 8.8828 | 0.1004 | | 8.9219 | 0.0156 | 322 | 8.875 | 0.1002 | | 8.9062 | 0.0156 | 323 | 8.875 | 0.1004 | | 8.8906 | 0.0157 | 324 | 8.875 | 0.1006 | | 8.8906 | 0.0157 | 325 | 8.8672 | 0.1011 | | 8.8672 | 0.0158 | 326 | 8.8672 | 0.1016 | | 8.875 | 0.0158 | 327 | 8.8594 | 0.1019 | | 8.8516 | 0.0159 | 328 | 8.8594 | 0.1022 | | 8.8672 | 0.0159 | 329 | 8.8516 | 
0.1020 |
| 8.8984 | 0.0160 | 330 | 8.8516 | 0.1018 |
| 8.875 | 0.0160 | 331 | 8.8438 | 0.1016 |
| 8.8828 | 0.0161 | 332 | 8.8438 | 0.1014 |
| 8.8438 | 0.0161 | 333 | 8.8359 | 0.1014 |
| 8.7969 | 0.0162 | 334 | 8.8359 | 0.1017 |
| 8.8828 | 0.0162 | 335 | 8.8281 | 0.1020 |
| 8.8281 | 0.0163 | 336 | 8.8281 | 0.1025 |
| 8.8203 | 0.0163 | 337 | 8.8281 | 0.1027 |
| 8.8594 | 0.0164 | 338 | 8.8203 | 0.1028 |
| 8.8594 | 0.0164 | 339 | 8.8203 | 0.1027 |
| 8.8203 | 0.0165 | 340 | 8.8125 | 0.1025 |
| 8.8359 | 0.0165 | 341 | 8.8125 | 0.1024 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.1.0a0+32f93b1
- Datasets 2.20.0
- Tokenizers 0.19.1
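The hyperparameter list in the card above maps roughly onto a `transformers.TrainingArguments` configuration like the following sketch. The original training script is not included in the card, so treat this as an illustration of the listed settings rather than the exact invocation; the `output_dir` name is a placeholder:

```python
from transformers import TrainingArguments

# Sketch of the settings listed under "Training hyperparameters".
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
training_args = TrainingArguments(
    output_dir="gpt_train_12_384",   # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=10,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    fp16=True,                       # "Native AMP" mixed precision
)
```

Multi-GPU distribution (`distributed_type: multi-GPU`) is handled by the launcher (e.g. `torchrun` or `accelerate launch`) rather than by these arguments.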
gokulsrinivasagan/gpt_train_6_768
gokulsrinivasagan
2024-07-02T16:09:16Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "dataset:gokuls/wiki_book_corpus_raw_dataset_tiny", "base_model:openai-community/gpt2", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-07-01T13:54:47Z
---
license: mit
base_model: openai-community/gpt2
tags:
- generated_from_trainer
datasets:
- gokuls/wiki_book_corpus_raw_dataset_tiny
metrics:
- accuracy
model-index:
- name: gpt_train_6_768
  results:
  - task:
      name: Causal Language Modeling
      type: text-generation
    dataset:
      name: gokuls/wiki_book_corpus_raw_dataset_tiny
      type: gokuls/wiki_book_corpus_raw_dataset_tiny
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.11184750327853396
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# gpt_train_6_768

This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on the gokuls/wiki_book_corpus_raw_dataset_tiny dataset.
It achieves the following results on the evaluation set:
- Loss: 7.6211
- Accuracy: 0.1118

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 10.9531 | 0.0000 | 1 | 10.9609 | 0.0000 |
| 10.9453 | 0.0001 | 2 | 10.9609 | 0.0000 |
| 10.9609 | 0.0001 | 3 | 10.9609 | 0.0000 |
| 10.9531 | 0.0002 | 4 | 10.9609 | 0.0000 |
| 10.9531 | 0.0002 | 5 | 10.9609 | 0.0000 |
| 10.9609 | 0.0003 | 6 | 10.9609 | 0.0000 |
| 10.9531 | 0.0003 | 7 | 10.9609 | 0.0000 |
| 10.9531 | 0.0004 | 8 | 10.9609 | 0.0000 |
| 10.9609 | 0.0004 | 9 | 10.9609 | 0.0000 |
| 10.9609 | 0.0005 | 10 | 10.9609 | 0.0000
| | 10.9609 | 0.0005 | 11 | 10.9609 | 0.0000 | | 10.9609 | 0.0006 | 12 | 10.9609 | 0.0000 | | 10.9609 | 0.0006 | 13 | 10.9609 | 0.0000 | | 10.9531 | 0.0007 | 14 | 10.9609 | 0.0000 | | 10.9531 | 0.0007 | 15 | 10.9609 | 0.0000 | | 10.9531 | 0.0008 | 16 | 10.9609 | 0.0000 | | 10.9609 | 0.0008 | 17 | 10.9609 | 0.0000 | | 10.9609 | 0.0009 | 18 | 10.7188 | 0.0158 | | 10.7422 | 0.0009 | 19 | 10.5078 | 0.0392 | | 10.5391 | 0.0010 | 20 | 10.3516 | 0.0403 | | 10.4219 | 0.0010 | 21 | 10.2422 | 0.0403 | | 10.2656 | 0.0011 | 22 | 10.1484 | 0.0401 | | 10.2109 | 0.0011 | 23 | 10.0703 | 0.0403 | | 10.125 | 0.0012 | 24 | 10.0078 | 0.0420 | | 10.0312 | 0.0012 | 25 | 9.9531 | 0.0449 | | 9.9766 | 0.0013 | 26 | 9.9062 | 0.0489 | | 9.9844 | 0.0013 | 27 | 9.8672 | 0.0504 | | 9.9062 | 0.0014 | 28 | 9.8359 | 0.0508 | | 9.875 | 0.0014 | 29 | 9.8047 | 0.0506 | | 9.8359 | 0.0015 | 30 | 9.7812 | 0.0511 | | 9.8516 | 0.0015 | 31 | 9.75 | 0.0513 | | 9.875 | 0.0015 | 32 | 9.7344 | 0.0511 | | 9.7109 | 0.0016 | 33 | 9.7109 | 0.0514 | | 9.7266 | 0.0016 | 34 | 9.6953 | 0.0506 | | 9.7344 | 0.0017 | 35 | 9.6797 | 0.0508 | | 9.7344 | 0.0017 | 36 | 9.6641 | 0.0524 | | 9.7422 | 0.0018 | 37 | 9.6484 | 0.0543 | | 9.6094 | 0.0018 | 38 | 9.6328 | 0.0555 | | 9.7188 | 0.0019 | 39 | 9.625 | 0.0562 | | 9.6484 | 0.0019 | 40 | 9.6094 | 0.0572 | | 9.6641 | 0.0020 | 41 | 9.6016 | 0.0573 | | 9.6562 | 0.0020 | 42 | 9.5859 | 0.0573 | | 9.6406 | 0.0021 | 43 | 9.5781 | 0.0590 | | 9.5234 | 0.0021 | 44 | 9.5625 | 0.0605 | | 9.5938 | 0.0022 | 45 | 9.5547 | 0.0615 | | 9.5859 | 0.0022 | 46 | 9.5391 | 0.0623 | | 9.5703 | 0.0023 | 47 | 9.5312 | 0.0626 | | 9.5078 | 0.0023 | 48 | 9.5156 | 0.0627 | | 9.6484 | 0.0024 | 49 | 9.5078 | 0.0628 | | 9.4922 | 0.0024 | 50 | 9.4922 | 0.0629 | | 9.5391 | 0.0025 | 51 | 9.4844 | 0.0633 | | 9.5859 | 0.0025 | 52 | 9.4688 | 0.0639 | | 9.5234 | 0.0026 | 53 | 9.4609 | 0.0647 | | 9.5 | 0.0026 | 54 | 9.4453 | 0.0657 | | 9.4609 | 0.0027 | 55 | 9.4375 | 0.0666 | | 9.4531 | 0.0027 | 56 | 9.4219 | 0.0677 | 
| 9.4375 | 0.0028 | 57 | 9.4141 | 0.0681 | | 9.4141 | 0.0028 | 58 | 9.3984 | 0.0683 | | 9.4844 | 0.0029 | 59 | 9.3906 | 0.0683 | | 9.4297 | 0.0029 | 60 | 9.3828 | 0.0691 | | 9.375 | 0.0030 | 61 | 9.3672 | 0.0694 | | 9.4219 | 0.0030 | 62 | 9.3594 | 0.0693 | | 9.3672 | 0.0031 | 63 | 9.3438 | 0.0692 | | 9.3906 | 0.0031 | 64 | 9.3359 | 0.0700 | | 9.4375 | 0.0031 | 65 | 9.3203 | 0.0713 | | 9.3203 | 0.0032 | 66 | 9.3125 | 0.0718 | | 9.375 | 0.0032 | 67 | 9.3047 | 0.0723 | | 9.3516 | 0.0033 | 68 | 9.2891 | 0.0726 | | 9.3359 | 0.0033 | 69 | 9.2812 | 0.0725 | | 9.2891 | 0.0034 | 70 | 9.2656 | 0.0722 | | 9.3047 | 0.0034 | 71 | 9.2578 | 0.0721 | | 9.3125 | 0.0035 | 72 | 9.2422 | 0.0721 | | 9.2891 | 0.0035 | 73 | 9.2344 | 0.0728 | | 9.2578 | 0.0036 | 74 | 9.2188 | 0.0741 | | 9.2422 | 0.0036 | 75 | 9.2109 | 0.0751 | | 9.2031 | 0.0037 | 76 | 9.1953 | 0.0760 | | 9.1641 | 0.0037 | 77 | 9.1875 | 0.0765 | | 9.1953 | 0.0038 | 78 | 9.1797 | 0.0767 | | 9.1484 | 0.0038 | 79 | 9.1641 | 0.0767 | | 9.1953 | 0.0039 | 80 | 9.1562 | 0.0768 | | 9.1328 | 0.0039 | 81 | 9.1406 | 0.0772 | | 9.2109 | 0.0040 | 82 | 9.1328 | 0.0773 | | 9.0547 | 0.0040 | 83 | 9.125 | 0.0771 | | 9.1094 | 0.0041 | 84 | 9.1094 | 0.0770 | | 9.1797 | 0.0041 | 85 | 9.1016 | 0.0768 | | 9.1484 | 0.0042 | 86 | 9.0859 | 0.0770 | | 9.1016 | 0.0042 | 87 | 9.0781 | 0.0772 | | 9.1172 | 0.0043 | 88 | 9.0703 | 0.0772 | | 9.0078 | 0.0043 | 89 | 9.0625 | 0.0772 | | 8.9688 | 0.0044 | 90 | 9.0469 | 0.0775 | | 9.0938 | 0.0044 | 91 | 9.0391 | 0.0781 | | 9.0547 | 0.0045 | 92 | 9.0312 | 0.0791 | | 9.0703 | 0.0045 | 93 | 9.0156 | 0.0798 | | 9.0234 | 0.0046 | 94 | 9.0078 | 0.0801 | | 9.0547 | 0.0046 | 95 | 9.0 | 0.0803 | | 9.0391 | 0.0046 | 96 | 8.9844 | 0.0804 | | 9.0703 | 0.0047 | 97 | 8.9766 | 0.0805 | | 9.0234 | 0.0047 | 98 | 8.9688 | 0.0806 | | 8.9062 | 0.0048 | 99 | 8.9609 | 0.0810 | | 8.9688 | 0.0048 | 100 | 8.9453 | 0.0815 | | 8.9609 | 0.0049 | 101 | 8.9375 | 0.0818 | | 9.0391 | 0.0049 | 102 | 8.9297 | 0.0820 | | 8.8984 | 0.0050 | 103 
| 8.9141 | 0.0821 | | 8.9688 | 0.0050 | 104 | 8.9062 | 0.0820 | | 8.9922 | 0.0051 | 105 | 8.8984 | 0.0819 | | 8.9062 | 0.0051 | 106 | 8.8906 | 0.0819 | | 8.9062 | 0.0052 | 107 | 8.875 | 0.0822 | | 8.9609 | 0.0052 | 108 | 8.8672 | 0.0826 | | 8.8672 | 0.0053 | 109 | 8.8594 | 0.0831 | | 8.8828 | 0.0053 | 110 | 8.8438 | 0.0836 | | 8.8516 | 0.0054 | 111 | 8.8359 | 0.0840 | | 8.8828 | 0.0054 | 112 | 8.8281 | 0.0844 | | 8.9297 | 0.0055 | 113 | 8.8203 | 0.0845 | | 8.9062 | 0.0055 | 114 | 8.8125 | 0.0847 | | 8.7969 | 0.0056 | 115 | 8.7969 | 0.0851 | | 8.8203 | 0.0056 | 116 | 8.7891 | 0.0855 | | 8.8047 | 0.0057 | 117 | 8.7812 | 0.0858 | | 8.7422 | 0.0057 | 118 | 8.7734 | 0.0858 | | 8.7266 | 0.0058 | 119 | 8.7656 | 0.0860 | | 8.6953 | 0.0058 | 120 | 8.7578 | 0.0861 | | 8.7422 | 0.0059 | 121 | 8.75 | 0.0861 | | 8.75 | 0.0059 | 122 | 8.7344 | 0.0863 | | 8.7422 | 0.0060 | 123 | 8.7266 | 0.0867 | | 8.6953 | 0.0060 | 124 | 8.7188 | 0.0870 | | 8.6328 | 0.0061 | 125 | 8.7109 | 0.0871 | | 8.7188 | 0.0061 | 126 | 8.7031 | 0.0871 | | 8.7891 | 0.0062 | 127 | 8.6953 | 0.0873 | | 8.7344 | 0.0062 | 128 | 8.6875 | 0.0874 | | 8.7578 | 0.0062 | 129 | 8.6797 | 0.0874 | | 8.6953 | 0.0063 | 130 | 8.6641 | 0.0874 | | 8.7266 | 0.0063 | 131 | 8.6562 | 0.0877 | | 8.6562 | 0.0064 | 132 | 8.6484 | 0.0882 | | 8.7188 | 0.0064 | 133 | 8.6406 | 0.0884 | | 8.6797 | 0.0065 | 134 | 8.6328 | 0.0883 | | 8.6562 | 0.0065 | 135 | 8.625 | 0.0883 | | 8.6172 | 0.0066 | 136 | 8.6172 | 0.0885 | | 8.6406 | 0.0066 | 137 | 8.6094 | 0.0889 | | 8.6797 | 0.0067 | 138 | 8.6016 | 0.0895 | | 8.6406 | 0.0067 | 139 | 8.5938 | 0.0901 | | 8.6094 | 0.0068 | 140 | 8.5859 | 0.0905 | | 8.7031 | 0.0068 | 141 | 8.5781 | 0.0906 | | 8.6797 | 0.0069 | 142 | 8.5703 | 0.0906 | | 8.6719 | 0.0069 | 143 | 8.5625 | 0.0905 | | 8.5703 | 0.0070 | 144 | 8.5547 | 0.0905 | | 8.6016 | 0.0070 | 145 | 8.5391 | 0.0907 | | 8.5859 | 0.0071 | 146 | 8.5312 | 0.0909 | | 8.5469 | 0.0071 | 147 | 8.5312 | 0.0911 | | 8.5391 | 0.0072 | 148 | 8.5234 | 0.0913 | | 
8.5391 | 0.0072 | 149 | 8.5156 | 0.0915 | | 8.5625 | 0.0073 | 150 | 8.5078 | 0.0918 | | 8.5469 | 0.0073 | 151 | 8.5 | 0.0921 | | 8.5234 | 0.0074 | 152 | 8.4922 | 0.0920 | | 8.5469 | 0.0074 | 153 | 8.4844 | 0.0922 | | 8.4766 | 0.0075 | 154 | 8.4766 | 0.0923 | | 8.4453 | 0.0075 | 155 | 8.4688 | 0.0925 | | 8.375 | 0.0076 | 156 | 8.4609 | 0.0929 | | 8.5156 | 0.0076 | 157 | 8.4531 | 0.0932 | | 8.5234 | 0.0077 | 158 | 8.4453 | 0.0934 | | 8.4844 | 0.0077 | 159 | 8.4375 | 0.0936 | | 8.5 | 0.0077 | 160 | 8.4297 | 0.0938 | | 8.3984 | 0.0078 | 161 | 8.4219 | 0.0936 | | 8.5156 | 0.0078 | 162 | 8.4219 | 0.0935 | | 8.4453 | 0.0079 | 163 | 8.4141 | 0.0934 | | 8.4375 | 0.0079 | 164 | 8.4062 | 0.0937 | | 8.4297 | 0.0080 | 165 | 8.3984 | 0.0944 | | 8.4453 | 0.0080 | 166 | 8.3906 | 0.0953 | | 8.4453 | 0.0081 | 167 | 8.3828 | 0.0961 | | 8.3828 | 0.0081 | 168 | 8.375 | 0.0963 | | 8.3828 | 0.0082 | 169 | 8.3672 | 0.0964 | | 8.4297 | 0.0082 | 170 | 8.3594 | 0.0963 | | 8.3828 | 0.0083 | 171 | 8.3516 | 0.0963 | | 8.3984 | 0.0083 | 172 | 8.3438 | 0.0965 | | 8.4375 | 0.0084 | 173 | 8.3359 | 0.0967 | | 8.3906 | 0.0084 | 174 | 8.3281 | 0.0970 | | 8.2578 | 0.0085 | 175 | 8.3203 | 0.0973 | | 8.2891 | 0.0085 | 176 | 8.3203 | 0.0976 | | 8.3125 | 0.0086 | 177 | 8.3125 | 0.0978 | | 8.3359 | 0.0086 | 178 | 8.3047 | 0.0981 | | 8.375 | 0.0087 | 179 | 8.2969 | 0.0982 | | 8.3125 | 0.0087 | 180 | 8.2891 | 0.0982 | | 8.2656 | 0.0088 | 181 | 8.2812 | 0.0981 | | 8.2812 | 0.0088 | 182 | 8.2734 | 0.0980 | | 8.3203 | 0.0089 | 183 | 8.2656 | 0.0979 | | 8.2344 | 0.0089 | 184 | 8.2656 | 0.0979 | | 8.3203 | 0.0090 | 185 | 8.2578 | 0.0982 | | 8.2422 | 0.0090 | 186 | 8.25 | 0.0986 | | 8.2344 | 0.0091 | 187 | 8.2422 | 0.0990 | | 8.2891 | 0.0091 | 188 | 8.2344 | 0.0995 | | 8.1875 | 0.0092 | 189 | 8.2266 | 0.0997 | | 8.2188 | 0.0092 | 190 | 8.2266 | 0.0997 | | 8.1953 | 0.0093 | 191 | 8.2188 | 0.0994 | | 8.2578 | 0.0093 | 192 | 8.2109 | 0.0991 | | 8.2188 | 0.0093 | 193 | 8.2031 | 0.0991 | | 8.2812 | 0.0094 | 194 | 8.1953 
| 0.0991 | | 8.1328 | 0.0094 | 195 | 8.1875 | 0.0992 | | 8.2578 | 0.0095 | 196 | 8.1875 | 0.0992 | | 8.1719 | 0.0095 | 197 | 8.1797 | 0.0996 | | 8.1953 | 0.0096 | 198 | 8.1719 | 0.1000 | | 8.1875 | 0.0096 | 199 | 8.1641 | 0.1002 | | 8.1953 | 0.0097 | 200 | 8.1562 | 0.1006 | | 8.1406 | 0.0097 | 201 | 8.1562 | 0.1008 | | 8.1797 | 0.0098 | 202 | 8.1484 | 0.1008 | | 8.1484 | 0.0098 | 203 | 8.1406 | 0.1006 | | 8.1719 | 0.0099 | 204 | 8.1328 | 0.1004 | | 8.1641 | 0.0099 | 205 | 8.125 | 0.1002 | | 8.2422 | 0.0100 | 206 | 8.125 | 0.1002 | | 8.0703 | 0.0100 | 207 | 8.1172 | 0.1005 | | 8.1328 | 0.0101 | 208 | 8.1094 | 0.1011 | | 8.1562 | 0.0101 | 209 | 8.1016 | 0.1016 | | 8.1797 | 0.0102 | 210 | 8.1016 | 0.1020 | | 8.1641 | 0.0102 | 211 | 8.0938 | 0.1022 | | 8.1016 | 0.0103 | 212 | 8.0859 | 0.1022 | | 8.1719 | 0.0103 | 213 | 8.0781 | 0.1020 | | 8.1094 | 0.0104 | 214 | 8.0703 | 0.1017 | | 8.0469 | 0.0104 | 215 | 8.0703 | 0.1016 | | 8.0859 | 0.0105 | 216 | 8.0625 | 0.1019 | | 8.0625 | 0.0105 | 217 | 8.0547 | 0.1023 | | 8.1406 | 0.0106 | 218 | 8.0547 | 0.1025 | | 8.0547 | 0.0106 | 219 | 8.0469 | 0.1027 | | 8.0234 | 0.0107 | 220 | 8.0391 | 0.1029 | | 8.0469 | 0.0107 | 221 | 8.0391 | 0.1029 | | 8.0312 | 0.0108 | 222 | 8.0312 | 0.1029 | | 8.0391 | 0.0108 | 223 | 8.0234 | 0.1028 | | 8.0391 | 0.0108 | 224 | 8.0156 | 0.1029 | | 8.0859 | 0.0109 | 225 | 8.0156 | 0.1029 | | 8.0391 | 0.0109 | 226 | 8.0078 | 0.1028 | | 7.9883 | 0.0110 | 227 | 8.0 | 0.1028 | | 8.0625 | 0.0110 | 228 | 7.9961 | 0.1029 | | 8.1094 | 0.0111 | 229 | 7.9883 | 0.1032 | | 8.0391 | 0.0111 | 230 | 7.9844 | 0.1034 | | 8.0078 | 0.0112 | 231 | 7.9805 | 0.1037 | | 8.0859 | 0.0112 | 232 | 7.9727 | 0.1039 | | 7.9961 | 0.0113 | 233 | 7.9688 | 0.1039 | | 8.0312 | 0.0113 | 234 | 7.9648 | 0.1039 | | 8.0391 | 0.0114 | 235 | 7.9570 | 0.1037 | | 7.9609 | 0.0114 | 236 | 7.9531 | 0.1037 | | 7.9336 | 0.0115 | 237 | 7.9492 | 0.1038 | | 7.9258 | 0.0115 | 238 | 7.9453 | 0.1040 | | 7.9531 | 0.0116 | 239 | 7.9375 | 0.1042 | | 7.9805 | 
0.0116 | 240 | 7.9336 | 0.1045 | | 8.0078 | 0.0117 | 241 | 7.9297 | 0.1048 | | 7.8906 | 0.0117 | 242 | 7.9258 | 0.1051 | | 7.9727 | 0.0118 | 243 | 7.9180 | 0.1054 | | 7.9336 | 0.0118 | 244 | 7.9141 | 0.1055 | | 7.9375 | 0.0119 | 245 | 7.9062 | 0.1055 | | 7.9922 | 0.0119 | 246 | 7.9023 | 0.1054 | | 7.9609 | 0.0120 | 247 | 7.8984 | 0.1053 | | 7.8945 | 0.0120 | 248 | 7.8906 | 0.1053 | | 7.8203 | 0.0121 | 249 | 7.8867 | 0.1055 | | 7.8984 | 0.0121 | 250 | 7.8828 | 0.1057 | | 7.9023 | 0.0122 | 251 | 7.8789 | 0.1058 | | 7.918 | 0.0122 | 252 | 7.875 | 0.1058 | | 7.832 | 0.0123 | 253 | 7.8672 | 0.1058 | | 7.9609 | 0.0123 | 254 | 7.8633 | 0.1056 | | 7.9531 | 0.0124 | 255 | 7.8594 | 0.1057 | | 7.8125 | 0.0124 | 256 | 7.8555 | 0.1059 | | 7.9648 | 0.0124 | 257 | 7.8516 | 0.1066 | | 7.832 | 0.0125 | 258 | 7.8438 | 0.1068 | | 7.8008 | 0.0125 | 259 | 7.8398 | 0.1069 | | 7.8281 | 0.0126 | 260 | 7.8359 | 0.1069 | | 7.8477 | 0.0126 | 261 | 7.8320 | 0.1069 | | 7.8086 | 0.0127 | 262 | 7.8281 | 0.1069 | | 7.8281 | 0.0127 | 263 | 7.8203 | 0.1069 | | 7.8906 | 0.0128 | 264 | 7.8164 | 0.1067 | | 7.8477 | 0.0128 | 265 | 7.8125 | 0.1067 | | 7.8867 | 0.0129 | 266 | 7.8086 | 0.1067 | | 7.8359 | 0.0129 | 267 | 7.8047 | 0.1069 | | 7.7969 | 0.0130 | 268 | 7.8008 | 0.1074 | | 7.8711 | 0.0130 | 269 | 7.7969 | 0.1079 | | 7.7656 | 0.0131 | 270 | 7.7930 | 0.1081 | | 7.8008 | 0.0131 | 271 | 7.7852 | 0.1080 | | 7.8594 | 0.0132 | 272 | 7.7812 | 0.1079 | | 7.8125 | 0.0132 | 273 | 7.7773 | 0.1077 | | 7.7617 | 0.0133 | 274 | 7.7734 | 0.1075 | | 7.7227 | 0.0133 | 275 | 7.7695 | 0.1074 | | 7.8164 | 0.0134 | 276 | 7.7656 | 0.1077 | | 7.7383 | 0.0134 | 277 | 7.7617 | 0.1081 | | 7.8984 | 0.0135 | 278 | 7.7578 | 0.1085 | | 7.793 | 0.0135 | 279 | 7.7539 | 0.1088 | | 7.707 | 0.0136 | 280 | 7.75 | 0.1088 | | 7.8086 | 0.0136 | 281 | 7.7422 | 0.1088 | | 7.7773 | 0.0137 | 282 | 7.7383 | 0.1088 | | 7.6875 | 0.0137 | 283 | 7.7344 | 0.1087 | | 7.7188 | 0.0138 | 284 | 7.7305 | 0.1088 | | 7.7539 | 0.0138 | 285 | 7.7266 | 
0.1091 |
| 7.8008 | 0.0139 | 286 | 7.7227 | 0.1094 |
| 7.7578 | 0.0139 | 287 | 7.7188 | 0.1097 |
| 7.7148 | 0.0139 | 288 | 7.7148 | 0.1099 |
| 7.7266 | 0.0140 | 289 | 7.7109 | 0.1100 |
| 7.7031 | 0.0140 | 290 | 7.7070 | 0.1099 |
| 7.7383 | 0.0141 | 291 | 7.7031 | 0.1098 |
| 7.7266 | 0.0141 | 292 | 7.6953 | 0.1098 |
| 7.75 | 0.0142 | 293 | 7.6914 | 0.1100 |
| 7.7031 | 0.0142 | 294 | 7.6914 | 0.1102 |
| 7.7305 | 0.0143 | 295 | 7.6875 | 0.1103 |
| 7.7188 | 0.0143 | 296 | 7.6836 | 0.1104 |
| 7.6719 | 0.0144 | 297 | 7.6797 | 0.1105 |
| 7.6289 | 0.0144 | 298 | 7.6758 | 0.1108 |
| 7.6719 | 0.0145 | 299 | 7.6719 | 0.1110 |
| 7.7695 | 0.0145 | 300 | 7.6641 | 0.1111 |
| 7.7812 | 0.0146 | 301 | 7.6602 | 0.1109 |
| 7.707 | 0.0146 | 302 | 7.6562 | 0.1110 |
| 7.7539 | 0.0147 | 303 | 7.6523 | 0.1111 |
| 7.5898 | 0.0147 | 304 | 7.6523 | 0.1114 |
| 7.668 | 0.0148 | 305 | 7.6484 | 0.1116 |
| 7.6602 | 0.0148 | 306 | 7.6445 | 0.1116 |
| 7.6953 | 0.0149 | 307 | 7.6406 | 0.1117 |
| 7.7031 | 0.0149 | 308 | 7.6367 | 0.1118 |
| 7.6914 | 0.0150 | 309 | 7.6328 | 0.1120 |
| 7.582 | 0.0150 | 310 | 7.6289 | 0.1119 |
| 7.6445 | 0.0151 | 311 | 7.625 | 0.1118 |
| 7.5234 | 0.0151 | 312 | 7.6211 | 0.1118 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.1.0a0+32f93b1
- Datasets 2.20.0
- Tokenizers 0.19.1
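The validation losses reported for this model are mean cross-entropy values for causal language modeling, so each one can be converted to a perplexity with `exp(loss)`. The snippet below is a quick derived sanity check, not a metric the Trainer reports in this card:

```python
import math

# Final validation loss reported for gpt_train_6_768.
val_loss = 7.6211

# For causal language modeling, perplexity is the exponential of the
# mean per-token cross-entropy loss.
perplexity = math.exp(val_loss)

print(f"loss {val_loss} -> perplexity {perplexity:.1f}")
```

A perplexity in the low thousands is consistent with a model evaluated after only a small fraction of an epoch on a tiny corpus.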
gokulsrinivasagan/gpt_train_12_128
gokulsrinivasagan
2024-07-02T16:07:34Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "dataset:gokuls/wiki_book_corpus_raw_dataset_tiny", "base_model:openai-community/gpt2", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-07-01T13:55:48Z
---
license: mit
base_model: openai-community/gpt2
tags:
- generated_from_trainer
datasets:
- gokuls/wiki_book_corpus_raw_dataset_tiny
metrics:
- accuracy
model-index:
- name: gpt_train_12_128
  results:
  - task:
      name: Causal Language Modeling
      type: text-generation
    dataset:
      name: gokuls/wiki_book_corpus_raw_dataset_tiny
      type: gokuls/wiki_book_corpus_raw_dataset_tiny
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.07807518032045319
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# gpt_train_12_128

This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on the gokuls/wiki_book_corpus_raw_dataset_tiny dataset.
It achieves the following results on the evaluation set:
- Loss: 10.0781
- Accuracy: 0.0781

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 10.8438 | 0.0001 | 1 | 10.8438 | 0.0103 |
| 10.8359 | 0.0001 | 2 | 10.8438 | 0.0103 |
| 10.8438 | 0.0002 | 3 | 10.8438 | 0.0103 |
| 10.8359 | 0.0003 | 4 | 10.8438 | 0.0103 |
| 10.8438 | 0.0004 | 5 | 10.8438 | 0.0103 |
| 10.8438 | 0.0004 | 6 | 10.8438 | 0.0103 |
| 10.8359 | 0.0005 | 7 | 10.8438 | 0.0103 |
| 10.8359 | 0.0006 | 8 | 10.8438 | 0.0103 |
| 10.8438 | 0.0007 | 9 | 10.8438 | 0.0103 |
| 10.8359 | 0.0007 | 10 | 10.8438 |
0.0103 | | 10.8359 | 0.0008 | 11 | 10.8438 | 0.0103 | | 10.8438 | 0.0009 | 12 | 10.8438 | 0.0103 | | 10.8438 | 0.0009 | 13 | 10.8438 | 0.0103 | | 10.8359 | 0.0010 | 14 | 10.8438 | 0.0103 | | 10.8359 | 0.0011 | 15 | 10.8438 | 0.0103 | | 10.8438 | 0.0012 | 16 | 10.8438 | 0.0103 | | 10.8359 | 0.0012 | 17 | 10.8438 | 0.0103 | | 10.8438 | 0.0013 | 18 | 10.8281 | 0.0113 | | 10.8203 | 0.0014 | 19 | 10.8125 | 0.0116 | | 10.8203 | 0.0015 | 20 | 10.8047 | 0.0117 | | 10.8047 | 0.0015 | 21 | 10.7891 | 0.0118 | | 10.7969 | 0.0016 | 22 | 10.7734 | 0.0118 | | 10.7812 | 0.0017 | 23 | 10.7656 | 0.0118 | | 10.7656 | 0.0017 | 24 | 10.75 | 0.0119 | | 10.7578 | 0.0018 | 25 | 10.7344 | 0.0121 | | 10.75 | 0.0019 | 26 | 10.7266 | 0.0124 | | 10.7344 | 0.0020 | 27 | 10.7188 | 0.0131 | | 10.7266 | 0.0020 | 28 | 10.7031 | 0.0144 | | 10.7109 | 0.0021 | 29 | 10.6953 | 0.0165 | | 10.7031 | 0.0022 | 30 | 10.6875 | 0.0196 | | 10.7031 | 0.0023 | 31 | 10.6797 | 0.0236 | | 10.6875 | 0.0023 | 32 | 10.6719 | 0.0282 | | 10.6797 | 0.0024 | 33 | 10.6641 | 0.0330 | | 10.6719 | 0.0025 | 34 | 10.6641 | 0.0375 | | 10.6719 | 0.0025 | 35 | 10.6562 | 0.0409 | | 10.6719 | 0.0026 | 36 | 10.6484 | 0.0437 | | 10.6484 | 0.0027 | 37 | 10.6484 | 0.0461 | | 10.6562 | 0.0028 | 38 | 10.6406 | 0.0481 | | 10.6484 | 0.0028 | 39 | 10.6406 | 0.0498 | | 10.6484 | 0.0029 | 40 | 10.6328 | 0.0511 | | 10.6406 | 0.0030 | 41 | 10.6328 | 0.0521 | | 10.6406 | 0.0031 | 42 | 10.625 | 0.0529 | | 10.6406 | 0.0031 | 43 | 10.625 | 0.0535 | | 10.625 | 0.0032 | 44 | 10.6172 | 0.0539 | | 10.625 | 0.0033 | 45 | 10.6172 | 0.0542 | | 10.625 | 0.0033 | 46 | 10.6172 | 0.0543 | | 10.6172 | 0.0034 | 47 | 10.6094 | 0.0544 | | 10.625 | 0.0035 | 48 | 10.6094 | 0.0545 | | 10.6172 | 0.0036 | 49 | 10.6016 | 0.0545 | | 10.6016 | 0.0036 | 50 | 10.6016 | 0.0545 | | 10.6016 | 0.0037 | 51 | 10.6016 | 0.0545 | | 10.6016 | 0.0038 | 52 | 10.5938 | 0.0546 | | 10.6016 | 0.0039 | 53 | 10.5938 | 0.0545 | | 10.5938 | 0.0039 | 54 | 10.5938 | 0.0545 | | 10.6016 | 0.0040 | 
55 | 10.5859 | 0.0545 | | 10.5859 | 0.0041 | 56 | 10.5859 | 0.0545 | | 10.6016 | 0.0041 | 57 | 10.5859 | 0.0545 | | 10.5859 | 0.0042 | 58 | 10.5859 | 0.0546 | | 10.5859 | 0.0043 | 59 | 10.5781 | 0.0547 | | 10.5781 | 0.0044 | 60 | 10.5781 | 0.0548 | | 10.5781 | 0.0044 | 61 | 10.5781 | 0.0550 | | 10.5781 | 0.0045 | 62 | 10.5703 | 0.0553 | | 10.5781 | 0.0046 | 63 | 10.5703 | 0.0557 | | 10.5703 | 0.0046 | 64 | 10.5703 | 0.0561 | | 10.5781 | 0.0047 | 65 | 10.5625 | 0.0566 | | 10.5625 | 0.0048 | 66 | 10.5625 | 0.0570 | | 10.5781 | 0.0049 | 67 | 10.5625 | 0.0573 | | 10.5703 | 0.0049 | 68 | 10.5547 | 0.0575 | | 10.5625 | 0.0050 | 69 | 10.5547 | 0.0577 | | 10.5625 | 0.0051 | 70 | 10.5547 | 0.0578 | | 10.5625 | 0.0052 | 71 | 10.5547 | 0.0579 | | 10.5547 | 0.0052 | 72 | 10.5469 | 0.0580 | | 10.5469 | 0.0053 | 73 | 10.5469 | 0.0580 | | 10.5469 | 0.0054 | 74 | 10.5469 | 0.0580 | | 10.5547 | 0.0054 | 75 | 10.5391 | 0.0580 | | 10.5547 | 0.0055 | 76 | 10.5391 | 0.0580 | | 10.5469 | 0.0056 | 77 | 10.5391 | 0.0582 | | 10.5469 | 0.0057 | 78 | 10.5391 | 0.0582 | | 10.5312 | 0.0057 | 79 | 10.5312 | 0.0584 | | 10.5312 | 0.0058 | 80 | 10.5312 | 0.0586 | | 10.5312 | 0.0059 | 81 | 10.5312 | 0.0590 | | 10.5312 | 0.0060 | 82 | 10.5312 | 0.0593 | | 10.5312 | 0.0060 | 83 | 10.5234 | 0.0597 | | 10.5234 | 0.0061 | 84 | 10.5234 | 0.0600 | | 10.5312 | 0.0062 | 85 | 10.5234 | 0.0602 | | 10.5312 | 0.0062 | 86 | 10.5234 | 0.0603 | | 10.5234 | 0.0063 | 87 | 10.5156 | 0.0604 | | 10.5156 | 0.0064 | 88 | 10.5156 | 0.0605 | | 10.5234 | 0.0065 | 89 | 10.5156 | 0.0606 | | 10.5156 | 0.0065 | 90 | 10.5156 | 0.0606 | | 10.5156 | 0.0066 | 91 | 10.5078 | 0.0606 | | 10.5156 | 0.0067 | 92 | 10.5078 | 0.0605 | | 10.5156 | 0.0068 | 93 | 10.5078 | 0.0603 | | 10.5156 | 0.0068 | 94 | 10.5078 | 0.0602 | | 10.5234 | 0.0069 | 95 | 10.5 | 0.0601 | | 10.5156 | 0.0070 | 96 | 10.5 | 0.0602 | | 10.5078 | 0.0070 | 97 | 10.5 | 0.0603 | | 10.5 | 0.0071 | 98 | 10.5 | 0.0603 | | 10.5078 | 0.0072 | 99 | 10.5 | 0.0604 | | 10.5078 | 
0.0073 | 100 | 10.4922 | 0.0606 | | 10.5 | 0.0073 | 101 | 10.4922 | 0.0607 | | 10.4922 | 0.0074 | 102 | 10.4922 | 0.0609 | | 10.4922 | 0.0075 | 103 | 10.4922 | 0.0612 | | 10.4844 | 0.0076 | 104 | 10.4844 | 0.0614 | | 10.4922 | 0.0076 | 105 | 10.4844 | 0.0617 | | 10.4922 | 0.0077 | 106 | 10.4844 | 0.0619 | | 10.4844 | 0.0078 | 107 | 10.4844 | 0.0622 | | 10.4922 | 0.0078 | 108 | 10.4766 | 0.0625 | | 10.4844 | 0.0079 | 109 | 10.4766 | 0.0628 | | 10.4766 | 0.0080 | 110 | 10.4766 | 0.0630 | | 10.4844 | 0.0081 | 111 | 10.4766 | 0.0632 | | 10.4766 | 0.0081 | 112 | 10.4766 | 0.0634 | | 10.4844 | 0.0082 | 113 | 10.4688 | 0.0636 | | 10.4766 | 0.0083 | 114 | 10.4688 | 0.0638 | | 10.4766 | 0.0084 | 115 | 10.4688 | 0.0640 | | 10.4844 | 0.0084 | 116 | 10.4688 | 0.0643 | | 10.4531 | 0.0085 | 117 | 10.4609 | 0.0644 | | 10.4609 | 0.0086 | 118 | 10.4609 | 0.0647 | | 10.4609 | 0.0086 | 119 | 10.4609 | 0.0648 | | 10.4688 | 0.0087 | 120 | 10.4609 | 0.0649 | | 10.4609 | 0.0088 | 121 | 10.4609 | 0.0651 | | 10.4609 | 0.0089 | 122 | 10.4531 | 0.0653 | | 10.4531 | 0.0089 | 123 | 10.4531 | 0.0656 | | 10.4531 | 0.0090 | 124 | 10.4531 | 0.0659 | | 10.4531 | 0.0091 | 125 | 10.4531 | 0.0660 | | 10.4531 | 0.0092 | 126 | 10.4453 | 0.0662 | | 10.4531 | 0.0092 | 127 | 10.4453 | 0.0664 | | 10.4453 | 0.0093 | 128 | 10.4453 | 0.0667 | | 10.4531 | 0.0094 | 129 | 10.4453 | 0.0670 | | 10.4375 | 0.0094 | 130 | 10.4453 | 0.0673 | | 10.4453 | 0.0095 | 131 | 10.4375 | 0.0676 | | 10.4375 | 0.0096 | 132 | 10.4375 | 0.0678 | | 10.4375 | 0.0097 | 133 | 10.4375 | 0.0679 | | 10.4297 | 0.0097 | 134 | 10.4375 | 0.0679 | | 10.4453 | 0.0098 | 135 | 10.4297 | 0.0678 | | 10.4375 | 0.0099 | 136 | 10.4297 | 0.0677 | | 10.4375 | 0.0100 | 137 | 10.4297 | 0.0677 | | 10.4219 | 0.0100 | 138 | 10.4297 | 0.0677 | | 10.4375 | 0.0101 | 139 | 10.4219 | 0.0678 | | 10.4297 | 0.0102 | 140 | 10.4219 | 0.0680 | | 10.4297 | 0.0102 | 141 | 10.4219 | 0.0682 | | 10.4219 | 0.0103 | 142 | 10.4219 | 0.0684 | | 10.4219 | 0.0104 | 143 | 10.4219 | 
0.0687 | | 10.4219 | 0.0105 | 144 | 10.4141 | 0.0689 | | 10.4219 | 0.0105 | 145 | 10.4141 | 0.0692 | | 10.4141 | 0.0106 | 146 | 10.4141 | 0.0693 | | 10.4062 | 0.0107 | 147 | 10.4141 | 0.0695 | | 10.4141 | 0.0108 | 148 | 10.4062 | 0.0696 | | 10.4141 | 0.0108 | 149 | 10.4062 | 0.0697 | | 10.4219 | 0.0109 | 150 | 10.4062 | 0.0697 | | 10.4062 | 0.0110 | 151 | 10.4062 | 0.0698 | | 10.4141 | 0.0110 | 152 | 10.4062 | 0.0700 | | 10.4141 | 0.0111 | 153 | 10.3984 | 0.0701 | | 10.4219 | 0.0112 | 154 | 10.3984 | 0.0702 | | 10.4141 | 0.0113 | 155 | 10.3984 | 0.0704 | | 10.4062 | 0.0113 | 156 | 10.3984 | 0.0705 | | 10.4062 | 0.0114 | 157 | 10.3906 | 0.0707 | | 10.3906 | 0.0115 | 158 | 10.3906 | 0.0708 | | 10.3906 | 0.0116 | 159 | 10.3906 | 0.0710 | | 10.3984 | 0.0116 | 160 | 10.3906 | 0.0711 | | 10.3984 | 0.0117 | 161 | 10.3906 | 0.0711 | | 10.3906 | 0.0118 | 162 | 10.3828 | 0.0712 | | 10.3906 | 0.0118 | 163 | 10.3828 | 0.0712 | | 10.3906 | 0.0119 | 164 | 10.3828 | 0.0714 | | 10.3828 | 0.0120 | 165 | 10.3828 | 0.0715 | | 10.375 | 0.0121 | 166 | 10.375 | 0.0716 | | 10.3828 | 0.0121 | 167 | 10.375 | 0.0717 | | 10.3828 | 0.0122 | 168 | 10.375 | 0.0718 | | 10.3828 | 0.0123 | 169 | 10.375 | 0.0719 | | 10.3828 | 0.0124 | 170 | 10.375 | 0.0721 | | 10.3672 | 0.0124 | 171 | 10.3672 | 0.0721 | | 10.375 | 0.0125 | 172 | 10.3672 | 0.0721 | | 10.3594 | 0.0126 | 173 | 10.3672 | 0.0721 | | 10.375 | 0.0126 | 174 | 10.3672 | 0.0720 | | 10.3594 | 0.0127 | 175 | 10.3594 | 0.0721 | | 10.3672 | 0.0128 | 176 | 10.3594 | 0.0722 | | 10.375 | 0.0129 | 177 | 10.3594 | 0.0723 | | 10.3672 | 0.0129 | 178 | 10.3594 | 0.0726 | | 10.3672 | 0.0130 | 179 | 10.3594 | 0.0727 | | 10.3594 | 0.0131 | 180 | 10.3516 | 0.0728 | | 10.3672 | 0.0132 | 181 | 10.3516 | 0.0729 | | 10.3594 | 0.0132 | 182 | 10.3516 | 0.0730 | | 10.3516 | 0.0133 | 183 | 10.3516 | 0.0731 | | 10.3594 | 0.0134 | 184 | 10.3516 | 0.0732 | | 10.3516 | 0.0134 | 185 | 10.3438 | 0.0733 | | 10.3516 | 0.0135 | 186 | 10.3438 | 0.0733 | | 10.3438 | 0.0136 | 
187 | 10.3438 | 0.0734 | | 10.3516 | 0.0137 | 188 | 10.3438 | 0.0734 | | 10.3516 | 0.0137 | 189 | 10.3359 | 0.0735 | | 10.3438 | 0.0138 | 190 | 10.3359 | 0.0735 | | 10.3516 | 0.0139 | 191 | 10.3359 | 0.0735 | | 10.3359 | 0.0139 | 192 | 10.3359 | 0.0737 | | 10.3359 | 0.0140 | 193 | 10.3359 | 0.0737 | | 10.3359 | 0.0141 | 194 | 10.3281 | 0.0736 | | 10.3359 | 0.0142 | 195 | 10.3281 | 0.0736 | | 10.3359 | 0.0142 | 196 | 10.3281 | 0.0736 | | 10.3281 | 0.0143 | 197 | 10.3281 | 0.0737 | | 10.3359 | 0.0144 | 198 | 10.3281 | 0.0738 | | 10.3203 | 0.0145 | 199 | 10.3203 | 0.0740 | | 10.3359 | 0.0145 | 200 | 10.3203 | 0.0741 | | 10.3359 | 0.0146 | 201 | 10.3203 | 0.0742 | | 10.3281 | 0.0147 | 202 | 10.3203 | 0.0743 | | 10.3203 | 0.0147 | 203 | 10.3125 | 0.0743 | | 10.3203 | 0.0148 | 204 | 10.3125 | 0.0743 | | 10.3281 | 0.0149 | 205 | 10.3125 | 0.0743 | | 10.3125 | 0.0150 | 206 | 10.3125 | 0.0741 | | 10.3125 | 0.0150 | 207 | 10.3125 | 0.0740 | | 10.3047 | 0.0151 | 208 | 10.3047 | 0.0740 | | 10.3125 | 0.0152 | 209 | 10.3047 | 0.0741 | | 10.3125 | 0.0153 | 210 | 10.3047 | 0.0742 | | 10.3203 | 0.0153 | 211 | 10.3047 | 0.0743 | | 10.3047 | 0.0154 | 212 | 10.3047 | 0.0744 | | 10.3203 | 0.0155 | 213 | 10.2969 | 0.0745 | | 10.3125 | 0.0155 | 214 | 10.2969 | 0.0747 | | 10.3047 | 0.0156 | 215 | 10.2969 | 0.0749 | | 10.2969 | 0.0157 | 216 | 10.2969 | 0.0750 | | 10.3047 | 0.0158 | 217 | 10.2969 | 0.0750 | | 10.2969 | 0.0158 | 218 | 10.2891 | 0.0749 | | 10.2891 | 0.0159 | 219 | 10.2891 | 0.0747 | | 10.2969 | 0.0160 | 220 | 10.2891 | 0.0744 | | 10.2969 | 0.0161 | 221 | 10.2891 | 0.0742 | | 10.2891 | 0.0161 | 222 | 10.2891 | 0.0741 | | 10.2891 | 0.0162 | 223 | 10.2812 | 0.0742 | | 10.2891 | 0.0163 | 224 | 10.2812 | 0.0743 | | 10.2891 | 0.0163 | 225 | 10.2812 | 0.0746 | | 10.2969 | 0.0164 | 226 | 10.2812 | 0.0748 | | 10.2812 | 0.0165 | 227 | 10.2734 | 0.0749 | | 10.2891 | 0.0166 | 228 | 10.2734 | 0.0750 | | 10.2734 | 0.0166 | 229 | 10.2734 | 0.0751 | | 10.2969 | 0.0167 | 230 | 10.2734 | 
0.0750 | | 10.2656 | 0.0168 | 231 | 10.2734 | 0.0749 | | 10.2734 | 0.0169 | 232 | 10.2656 | 0.0747 | | 10.2734 | 0.0169 | 233 | 10.2656 | 0.0747 | | 10.2734 | 0.0170 | 234 | 10.2656 | 0.0746 | | 10.2656 | 0.0171 | 235 | 10.2656 | 0.0747 | | 10.2656 | 0.0171 | 236 | 10.2656 | 0.0748 | | 10.2734 | 0.0172 | 237 | 10.2578 | 0.0749 | | 10.2656 | 0.0173 | 238 | 10.2578 | 0.0752 | | 10.2734 | 0.0174 | 239 | 10.2578 | 0.0755 | | 10.2578 | 0.0174 | 240 | 10.2578 | 0.0756 | | 10.2734 | 0.0175 | 241 | 10.2578 | 0.0756 | | 10.2656 | 0.0176 | 242 | 10.25 | 0.0756 | | 10.2578 | 0.0177 | 243 | 10.25 | 0.0756 | | 10.2578 | 0.0177 | 244 | 10.25 | 0.0756 | | 10.2578 | 0.0178 | 245 | 10.25 | 0.0756 | | 10.2578 | 0.0179 | 246 | 10.25 | 0.0756 | | 10.2578 | 0.0179 | 247 | 10.2422 | 0.0757 | | 10.2578 | 0.0180 | 248 | 10.2422 | 0.0758 | | 10.2422 | 0.0181 | 249 | 10.2422 | 0.0759 | | 10.2422 | 0.0182 | 250 | 10.2422 | 0.0759 | | 10.2422 | 0.0182 | 251 | 10.2422 | 0.0759 | | 10.2422 | 0.0183 | 252 | 10.2344 | 0.0759 | | 10.2422 | 0.0184 | 253 | 10.2344 | 0.0759 | | 10.2422 | 0.0185 | 254 | 10.2344 | 0.0759 | | 10.2422 | 0.0185 | 255 | 10.2344 | 0.0761 | | 10.2422 | 0.0186 | 256 | 10.2344 | 0.0761 | | 10.2422 | 0.0187 | 257 | 10.2266 | 0.0760 | | 10.2422 | 0.0187 | 258 | 10.2266 | 0.0760 | | 10.2344 | 0.0188 | 259 | 10.2266 | 0.0759 | | 10.2344 | 0.0189 | 260 | 10.2266 | 0.0759 | | 10.2266 | 0.0190 | 261 | 10.2266 | 0.0760 | | 10.2188 | 0.0190 | 262 | 10.2188 | 0.0760 | | 10.2266 | 0.0191 | 263 | 10.2188 | 0.0762 | | 10.2266 | 0.0192 | 264 | 10.2188 | 0.0762 | | 10.2188 | 0.0193 | 265 | 10.2188 | 0.0762 | | 10.2266 | 0.0193 | 266 | 10.2188 | 0.0762 | | 10.2188 | 0.0194 | 267 | 10.2109 | 0.0762 | | 10.2109 | 0.0195 | 268 | 10.2109 | 0.0763 | | 10.2109 | 0.0195 | 269 | 10.2109 | 0.0762 | | 10.2109 | 0.0196 | 270 | 10.2109 | 0.0761 | | 10.2188 | 0.0197 | 271 | 10.2109 | 0.0761 | | 10.2109 | 0.0198 | 272 | 10.2031 | 0.0760 | | 10.2188 | 0.0198 | 273 | 10.2031 | 0.0761 | | 10.2266 | 0.0199 | 
274 | 10.2031 | 0.0762 | | 10.2188 | 0.0200 | 275 | 10.2031 | 0.0762 | | 10.2109 | 0.0201 | 276 | 10.1953 | 0.0761 | | 10.2109 | 0.0201 | 277 | 10.1953 | 0.0762 | | 10.1953 | 0.0202 | 278 | 10.1953 | 0.0762 | | 10.2031 | 0.0203 | 279 | 10.1953 | 0.0763 | | 10.2188 | 0.0203 | 280 | 10.1953 | 0.0765 | | 10.1953 | 0.0204 | 281 | 10.1875 | 0.0766 | | 10.1953 | 0.0205 | 282 | 10.1875 | 0.0767 | | 10.2031 | 0.0206 | 283 | 10.1875 | 0.0767 | | 10.1797 | 0.0206 | 284 | 10.1875 | 0.0766 | | 10.1953 | 0.0207 | 285 | 10.1875 | 0.0765 | | 10.1953 | 0.0208 | 286 | 10.1797 | 0.0764 | | 10.1875 | 0.0209 | 287 | 10.1797 | 0.0764 | | 10.1953 | 0.0209 | 288 | 10.1797 | 0.0765 | | 10.1875 | 0.0210 | 289 | 10.1797 | 0.0765 | | 10.1875 | 0.0211 | 290 | 10.1797 | 0.0768 | | 10.1797 | 0.0211 | 291 | 10.1719 | 0.0770 | | 10.1719 | 0.0212 | 292 | 10.1719 | 0.0771 | | 10.1719 | 0.0213 | 293 | 10.1719 | 0.0772 | | 10.1797 | 0.0214 | 294 | 10.1719 | 0.0773 | | 10.1797 | 0.0214 | 295 | 10.1719 | 0.0773 | | 10.1641 | 0.0215 | 296 | 10.1641 | 0.0773 | | 10.1719 | 0.0216 | 297 | 10.1641 | 0.0773 | | 10.1719 | 0.0217 | 298 | 10.1641 | 0.0773 | | 10.1719 | 0.0217 | 299 | 10.1641 | 0.0773 | | 10.1719 | 0.0218 | 300 | 10.1641 | 0.0773 | | 10.1641 | 0.0219 | 301 | 10.1641 | 0.0773 | | 10.1562 | 0.0219 | 302 | 10.1562 | 0.0772 | | 10.1719 | 0.0220 | 303 | 10.1562 | 0.0771 | | 10.1562 | 0.0221 | 304 | 10.1562 | 0.0772 | | 10.1641 | 0.0222 | 305 | 10.1562 | 0.0773 | | 10.1562 | 0.0222 | 306 | 10.1484 | 0.0773 | | 10.1641 | 0.0223 | 307 | 10.1484 | 0.0773 | | 10.1719 | 0.0224 | 308 | 10.1484 | 0.0775 | | 10.1562 | 0.0224 | 309 | 10.1484 | 0.0775 | | 10.1719 | 0.0225 | 310 | 10.1484 | 0.0775 | | 10.1562 | 0.0226 | 311 | 10.1406 | 0.0774 | | 10.1562 | 0.0227 | 312 | 10.1406 | 0.0774 | | 10.1562 | 0.0227 | 313 | 10.1406 | 0.0773 | | 10.1406 | 0.0228 | 314 | 10.1406 | 0.0774 | | 10.1406 | 0.0229 | 315 | 10.1406 | 0.0774 | | 10.1406 | 0.0230 | 316 | 10.1406 | 0.0774 | | 10.1328 | 0.0230 | 317 | 10.1328 | 
0.0775 |
| 10.1484 | 0.0231 | 318 | 10.1328 | 0.0775 |
| 10.1328 | 0.0232 | 319 | 10.1328 | 0.0775 |
| 10.1328 | 0.0232 | 320 | 10.1328 | 0.0775 |
| 10.125 | 0.0233 | 321 | 10.1328 | 0.0775 |
| 10.1406 | 0.0234 | 322 | 10.125 | 0.0776 |
| 10.1328 | 0.0235 | 323 | 10.125 | 0.0777 |
| 10.125 | 0.0235 | 324 | 10.125 | 0.0778 |
| 10.125 | 0.0236 | 325 | 10.125 | 0.0777 |
| 10.125 | 0.0237 | 326 | 10.125 | 0.0777 |
| 10.1328 | 0.0238 | 327 | 10.1172 | 0.0777 |
| 10.1172 | 0.0238 | 328 | 10.1172 | 0.0777 |
| 10.1172 | 0.0239 | 329 | 10.1172 | 0.0777 |
| 10.125 | 0.0240 | 330 | 10.1172 | 0.0778 |
| 10.1094 | 0.0240 | 331 | 10.1172 | 0.0778 |
| 10.1094 | 0.0241 | 332 | 10.1094 | 0.0777 |
| 10.1094 | 0.0242 | 333 | 10.1094 | 0.0776 |
| 10.1172 | 0.0243 | 334 | 10.1094 | 0.0775 |
| 10.125 | 0.0243 | 335 | 10.1094 | 0.0774 |
| 10.1172 | 0.0244 | 336 | 10.1094 | 0.0772 |
| 10.1016 | 0.0245 | 337 | 10.1016 | 0.0771 |
| 10.1094 | 0.0246 | 338 | 10.1016 | 0.0773 |
| 10.1172 | 0.0246 | 339 | 10.1016 | 0.0775 |
| 10.1094 | 0.0247 | 340 | 10.1016 | 0.0777 |
| 10.1172 | 0.0248 | 341 | 10.1016 | 0.0778 |
| 10.0938 | 0.0248 | 342 | 10.0938 | 0.0779 |
| 10.1016 | 0.0249 | 343 | 10.0938 | 0.0780 |
| 10.0938 | 0.0250 | 344 | 10.0938 | 0.0780 |
| 10.0938 | 0.0251 | 345 | 10.0938 | 0.0780 |
| 10.1016 | 0.0251 | 346 | 10.0938 | 0.0781 |
| 10.1094 | 0.0252 | 347 | 10.0859 | 0.0780 |
| 10.0938 | 0.0253 | 348 | 10.0859 | 0.0780 |
| 10.0938 | 0.0254 | 349 | 10.0859 | 0.0780 |
| 10.0859 | 0.0254 | 350 | 10.0859 | 0.0779 |
| 10.0859 | 0.0255 | 351 | 10.0859 | 0.0780 |
| 10.0938 | 0.0256 | 352 | 10.0781 | 0.0781 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.1.0a0+32f93b1
- Datasets 2.20.0
- Tokenizers 0.19.1
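The `lr_scheduler_type: linear` setting used for these runs decays the learning rate linearly from its initial value toward zero over the total number of training steps. Below is a minimal sketch of that schedule, assuming zero warmup steps (no warmup is listed in the hyperparameters); the exact Trainer implementation may differ in such details:

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 1e-05) -> float:
    """Learning rate at `step` under a warmup-free linear decay schedule.

    Mirrors the shape of the `linear` scheduler named in the
    hyperparameters; not the Trainer's actual implementation.
    """
    remaining = max(0, total_steps - step)
    return base_lr * remaining / total_steps

# Full rate at the start, half at the midpoint, zero at (and past) the end.
print(linear_lr(0, 1000), linear_lr(500, 1000), linear_lr(1000, 1000))
```

With `num_epochs: 100`, `total_steps` would be 100 times the number of batches per epoch, so at the tiny step counts shown in the tables the learning rate is still essentially at its base value.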
gokulsrinivasagan/gpt_train_12_512
gokulsrinivasagan
2024-07-02T16:19:10Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "dataset:gokuls/wiki_book_corpus_raw_dataset_tiny", "base_model:openai-community/gpt2", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-07-01T13:55:52Z
--- license: mit base_model: openai-community/gpt2 tags: - generated_from_trainer datasets: - gokuls/wiki_book_corpus_raw_dataset_tiny metrics: - accuracy model-index: - name: gpt_train_12_512 results: - task: name: Causal Language Modeling type: text-generation dataset: name: gokuls/wiki_book_corpus_raw_dataset_tiny type: gokuls/wiki_book_corpus_raw_dataset_tiny metrics: - name: Accuracy type: accuracy value: 0.09167533902983765 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt_train_12_512 This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on the gokuls/wiki_book_corpus_raw_dataset_tiny dataset. It achieves the following results on the evaluation set: - Loss: 8.9141 - Accuracy: 0.0917 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 10.8828 | 0.0000 | 1 | 10.8828 | 0.0001 | | 10.8984 | 0.0001 | 2 | 10.8828 | 0.0001 | | 10.8906 | 0.0001 | 3 | 10.8828 | 0.0001 | | 10.8828 | 0.0001 | 4 | 10.8828 | 0.0001 | | 10.8828 | 0.0002 | 5 | 10.8828 | 0.0001 | | 10.8828 | 0.0002 | 6 | 10.8828 | 0.0001 | | 10.8906 | 0.0003 | 7 | 10.8828 | 0.0001 | | 10.8828 | 0.0003 | 8 | 10.8828 | 0.0001 | | 10.875 | 0.0003 | 9 | 10.8828 | 0.0001 | | 10.8984 | 0.0004 | 10 | 10.8828 | 
0.0001 | | 10.8828 | 0.0004 | 11 | 10.8828 | 0.0001 | | 10.8906 | 0.0004 | 12 | 10.8828 | 0.0001 | | 10.8828 | 0.0005 | 13 | 10.8828 | 0.0001 | | 10.8828 | 0.0005 | 14 | 10.8828 | 0.0001 | | 10.8828 | 0.0005 | 15 | 10.8828 | 0.0001 | | 10.8828 | 0.0006 | 16 | 10.8828 | 0.0001 | | 10.875 | 0.0006 | 17 | 10.8828 | 0.0001 | | 10.8828 | 0.0007 | 18 | 10.6328 | 0.0197 | | 10.6641 | 0.0007 | 19 | 10.4844 | 0.0444 | | 10.5078 | 0.0007 | 20 | 10.3828 | 0.0499 | | 10.3984 | 0.0008 | 21 | 10.3125 | 0.0532 | | 10.3438 | 0.0008 | 22 | 10.25 | 0.0550 | | 10.2656 | 0.0008 | 23 | 10.2031 | 0.0562 | | 10.25 | 0.0009 | 24 | 10.1641 | 0.0540 | | 10.1875 | 0.0009 | 25 | 10.1328 | 0.0470 | | 10.125 | 0.0009 | 26 | 10.1094 | 0.0461 | | 10.125 | 0.0010 | 27 | 10.0859 | 0.0480 | | 10.0938 | 0.0010 | 28 | 10.0703 | 0.0474 | | 10.0625 | 0.0011 | 29 | 10.0547 | 0.0465 | | 10.0703 | 0.0011 | 30 | 10.0391 | 0.0472 | | 10.0156 | 0.0011 | 31 | 10.0234 | 0.0515 | | 10.0859 | 0.0012 | 32 | 10.0156 | 0.0587 | | 9.9922 | 0.0012 | 33 | 10.0078 | 0.0613 | | 10.0234 | 0.0012 | 34 | 9.9922 | 0.0608 | | 9.9609 | 0.0013 | 35 | 9.9844 | 0.0600 | | 10.0391 | 0.0013 | 36 | 9.9766 | 0.0608 | | 9.9922 | 0.0013 | 37 | 9.9609 | 0.0619 | | 9.9688 | 0.0014 | 38 | 9.9531 | 0.0623 | | 9.9453 | 0.0014 | 39 | 9.9375 | 0.0622 | | 9.9609 | 0.0015 | 40 | 9.9297 | 0.0628 | | 9.9609 | 0.0015 | 41 | 9.9141 | 0.0640 | | 10.0234 | 0.0015 | 42 | 9.8984 | 0.0649 | | 9.9375 | 0.0016 | 43 | 9.8906 | 0.0648 | | 9.8516 | 0.0016 | 44 | 9.875 | 0.0644 | | 9.8672 | 0.0016 | 45 | 9.8594 | 0.0643 | | 9.8984 | 0.0017 | 46 | 9.8438 | 0.0643 | | 9.875 | 0.0017 | 47 | 9.8359 | 0.0645 | | 9.8672 | 0.0017 | 48 | 9.8203 | 0.0646 | | 9.8984 | 0.0018 | 49 | 9.8125 | 0.0649 | | 9.7891 | 0.0018 | 50 | 9.8047 | 0.0653 | | 9.8281 | 0.0019 | 51 | 9.7891 | 0.0655 | | 9.8281 | 0.0019 | 52 | 9.7812 | 0.0654 | | 9.7969 | 0.0019 | 53 | 9.7734 | 0.0660 | | 9.7812 | 0.0020 | 54 | 9.7656 | 0.0670 | | 9.8047 | 0.0020 | 55 | 9.75 | 0.0682 | | 9.7969 | 0.0020 
| 56 | 9.7422 | 0.0688 | | 9.7891 | 0.0021 | 57 | 9.7344 | 0.0691 | | 9.6875 | 0.0021 | 58 | 9.7266 | 0.0690 | | 9.7188 | 0.0021 | 59 | 9.7188 | 0.0686 | | 9.7344 | 0.0022 | 60 | 9.7109 | 0.0682 | | 9.7344 | 0.0022 | 61 | 9.6953 | 0.0687 | | 9.7578 | 0.0023 | 62 | 9.6875 | 0.0697 | | 9.6484 | 0.0023 | 63 | 9.6719 | 0.0708 | | 9.6328 | 0.0023 | 64 | 9.6641 | 0.0715 | | 9.7656 | 0.0024 | 65 | 9.6562 | 0.0721 | | 9.6875 | 0.0024 | 66 | 9.6484 | 0.0725 | | 9.6328 | 0.0024 | 67 | 9.6406 | 0.0727 | | 9.6953 | 0.0025 | 68 | 9.6328 | 0.0734 | | 9.7188 | 0.0025 | 69 | 9.625 | 0.0744 | | 9.6875 | 0.0025 | 70 | 9.6172 | 0.0753 | | 9.625 | 0.0026 | 71 | 9.6094 | 0.0763 | | 9.6172 | 0.0026 | 72 | 9.6016 | 0.0769 | | 9.6016 | 0.0027 | 73 | 9.5938 | 0.0771 | | 9.6094 | 0.0027 | 74 | 9.5859 | 0.0771 | | 9.5859 | 0.0027 | 75 | 9.5781 | 0.0771 | | 9.5859 | 0.0028 | 76 | 9.5703 | 0.0767 | | 9.5859 | 0.0028 | 77 | 9.5625 | 0.0765 | | 9.5781 | 0.0028 | 78 | 9.5547 | 0.0764 | | 9.6172 | 0.0029 | 79 | 9.5469 | 0.0763 | | 9.5859 | 0.0029 | 80 | 9.5391 | 0.0768 | | 9.5859 | 0.0029 | 81 | 9.5312 | 0.0770 | | 9.5391 | 0.0030 | 82 | 9.5234 | 0.0770 | | 9.5391 | 0.0030 | 83 | 9.5234 | 0.0764 | | 9.5312 | 0.0031 | 84 | 9.5156 | 0.0758 | | 9.5547 | 0.0031 | 85 | 9.5078 | 0.0757 | | 9.5781 | 0.0031 | 86 | 9.5 | 0.0760 | | 9.5703 | 0.0032 | 87 | 9.4922 | 0.0764 | | 9.4844 | 0.0032 | 88 | 9.4844 | 0.0764 | | 9.5312 | 0.0032 | 89 | 9.4766 | 0.0765 | | 9.5312 | 0.0033 | 90 | 9.4688 | 0.0765 | | 9.5078 | 0.0033 | 91 | 9.4688 | 0.0766 | | 9.5 | 0.0033 | 92 | 9.4609 | 0.0768 | | 9.4844 | 0.0034 | 93 | 9.4531 | 0.0769 | | 9.4688 | 0.0034 | 94 | 9.4453 | 0.0773 | | 9.5156 | 0.0035 | 95 | 9.4375 | 0.0777 | | 9.4453 | 0.0035 | 96 | 9.4297 | 0.0783 | | 9.4766 | 0.0035 | 97 | 9.4219 | 0.0794 | | 9.4219 | 0.0036 | 98 | 9.4219 | 0.0804 | | 9.4531 | 0.0036 | 99 | 9.4141 | 0.0814 | | 9.4141 | 0.0036 | 100 | 9.4062 | 0.0819 | | 9.375 | 0.0037 | 101 | 9.3984 | 0.0825 | | 9.4219 | 0.0037 | 102 | 9.3906 | 0.0828 | | 
9.3828 | 0.0037 | 103 | 9.3828 | 0.0828 | | 9.375 | 0.0038 | 104 | 9.3828 | 0.0827 | | 9.3516 | 0.0038 | 105 | 9.375 | 0.0825 | | 9.3906 | 0.0039 | 106 | 9.3672 | 0.0825 | | 9.3672 | 0.0039 | 107 | 9.3594 | 0.0823 | | 9.3359 | 0.0039 | 108 | 9.3516 | 0.0822 | | 9.4062 | 0.0040 | 109 | 9.3438 | 0.0818 | | 9.3906 | 0.0040 | 110 | 9.3438 | 0.0816 | | 9.25 | 0.0040 | 111 | 9.3359 | 0.0816 | | 9.3281 | 0.0041 | 112 | 9.3281 | 0.0816 | | 9.375 | 0.0041 | 113 | 9.3203 | 0.0813 | | 9.3906 | 0.0041 | 114 | 9.3203 | 0.0812 | | 9.3203 | 0.0042 | 115 | 9.3125 | 0.0812 | | 9.3125 | 0.0042 | 116 | 9.3047 | 0.0811 | | 9.3359 | 0.0043 | 117 | 9.2969 | 0.0809 | | 9.2812 | 0.0043 | 118 | 9.2969 | 0.0808 | | 9.2031 | 0.0043 | 119 | 9.2891 | 0.0807 | | 9.2422 | 0.0044 | 120 | 9.2812 | 0.0808 | | 9.3047 | 0.0044 | 121 | 9.2812 | 0.0809 | | 9.2969 | 0.0044 | 122 | 9.2734 | 0.0810 | | 9.25 | 0.0045 | 123 | 9.2656 | 0.0815 | | 9.3281 | 0.0045 | 124 | 9.2578 | 0.0825 | | 9.2656 | 0.0045 | 125 | 9.2578 | 0.0836 | | 9.3047 | 0.0046 | 126 | 9.25 | 0.0845 | | 9.25 | 0.0046 | 127 | 9.2422 | 0.0850 | | 9.2969 | 0.0046 | 128 | 9.2344 | 0.0852 | | 9.3203 | 0.0047 | 129 | 9.2344 | 0.0853 | | 9.25 | 0.0047 | 130 | 9.2266 | 0.0853 | | 9.2422 | 0.0048 | 131 | 9.2188 | 0.0854 | | 9.1641 | 0.0048 | 132 | 9.2109 | 0.0855 | | 9.2109 | 0.0048 | 133 | 9.2109 | 0.0858 | | 9.2422 | 0.0049 | 134 | 9.2031 | 0.0860 | | 9.2188 | 0.0049 | 135 | 9.1953 | 0.0861 | | 9.3047 | 0.0049 | 136 | 9.1875 | 0.0861 | | 9.1641 | 0.0050 | 137 | 9.1875 | 0.0861 | | 9.2188 | 0.0050 | 138 | 9.1797 | 0.0859 | | 9.2422 | 0.0050 | 139 | 9.1719 | 0.0856 | | 9.2422 | 0.0051 | 140 | 9.1719 | 0.0855 | | 9.1484 | 0.0051 | 141 | 9.1641 | 0.0852 | | 9.2422 | 0.0052 | 142 | 9.1562 | 0.0851 | | 9.1953 | 0.0052 | 143 | 9.1484 | 0.0852 | | 9.1641 | 0.0052 | 144 | 9.1484 | 0.0853 | | 9.1875 | 0.0053 | 145 | 9.1406 | 0.0854 | | 9.1172 | 0.0053 | 146 | 9.1328 | 0.0855 | | 9.1094 | 0.0053 | 147 | 9.1328 | 0.0856 | | 9.1328 | 0.0054 | 148 | 9.125 | 
0.0859 | | 9.1641 | 0.0054 | 149 | 9.1172 | 0.0863 | | 9.1641 | 0.0054 | 150 | 9.1094 | 0.0868 | | 9.1875 | 0.0055 | 151 | 9.1094 | 0.0873 | | 9.2031 | 0.0055 | 152 | 9.1016 | 0.0875 | | 9.0703 | 0.0056 | 153 | 9.0938 | 0.0880 | | 9.1484 | 0.0056 | 154 | 9.0859 | 0.0884 | | 9.0625 | 0.0056 | 155 | 9.0859 | 0.0888 | | 9.0781 | 0.0057 | 156 | 9.0781 | 0.0889 | | 9.0234 | 0.0057 | 157 | 9.0703 | 0.0892 | | 9.0781 | 0.0057 | 158 | 9.0703 | 0.0894 | | 9.0 | 0.0058 | 159 | 9.0625 | 0.0895 | | 9.0312 | 0.0058 | 160 | 9.0547 | 0.0896 | | 9.0391 | 0.0058 | 161 | 9.0547 | 0.0898 | | 9.0469 | 0.0059 | 162 | 9.0469 | 0.0901 | | 9.0859 | 0.0059 | 163 | 9.0391 | 0.0905 | | 9.0078 | 0.0060 | 164 | 9.0312 | 0.0908 | | 9.0156 | 0.0060 | 165 | 9.0312 | 0.0909 | | 9.0469 | 0.0060 | 166 | 9.0234 | 0.0909 | | 8.9219 | 0.0061 | 167 | 9.0234 | 0.0908 | | 9.0312 | 0.0061 | 168 | 9.0156 | 0.0907 | | 9.0938 | 0.0061 | 169 | 9.0078 | 0.0906 | | 9.0156 | 0.0062 | 170 | 9.0 | 0.0902 | | 9.0312 | 0.0062 | 171 | 9.0 | 0.0897 | | 9.0625 | 0.0062 | 172 | 8.9922 | 0.0893 | | 8.9844 | 0.0063 | 173 | 8.9844 | 0.0891 | | 9.0703 | 0.0063 | 174 | 8.9844 | 0.0894 | | 8.9609 | 0.0064 | 175 | 8.9766 | 0.0898 | | 8.9922 | 0.0064 | 176 | 8.9766 | 0.0905 | | 9.0234 | 0.0064 | 177 | 8.9688 | 0.0910 | | 9.0234 | 0.0065 | 178 | 8.9609 | 0.0915 | | 8.9219 | 0.0065 | 179 | 8.9531 | 0.0919 | | 9.0234 | 0.0065 | 180 | 8.9531 | 0.0920 | | 8.9375 | 0.0066 | 181 | 8.9453 | 0.0921 | | 8.9688 | 0.0066 | 182 | 8.9375 | 0.0919 | | 8.9375 | 0.0066 | 183 | 8.9375 | 0.0913 | | 9.0 | 0.0067 | 184 | 8.9297 | 0.0912 | | 8.9375 | 0.0067 | 185 | 8.9219 | 0.0913 | | 8.9609 | 0.0068 | 186 | 8.9219 | 0.0913 | | 8.9688 | 0.0068 | 187 | 8.9141 | 0.0917 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.0a0+32f93b1 - Datasets 2.20.0 - Tokenizers 0.19.1
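The `Accuracy` column reported throughout the table above is, under the usual `Trainer` setup for causal language modeling, token-level next-token prediction accuracy: the model's argmax prediction at each position is compared against the label shifted by one, with ignored positions excluded. A minimal sketch of that computation (the shift-and-mask logic is the standard convention; the exact metric code used for this card is not shown):

```python
def causal_lm_accuracy(predicted_ids, label_ids):
    """Token-level accuracy for a causal LM.

    The model at position i predicts the token at position i + 1, so
    predictions are compared against labels shifted left by one.
    Positions whose label is -100 (the usual ignore index for padding)
    are excluded. This is a simplified sketch of the metric shape, not
    the exact code behind the card above.
    """
    preds = predicted_ids[:-1]   # prediction made for each next token
    labels = label_ids[1:]       # the token that actually came next
    pairs = [(p, l) for p, l in zip(preds, labels) if l != -100]
    if not pairs:
        return 0.0
    correct = sum(1 for p, l in pairs if p == l)
    return correct / len(pairs)

# Toy example with one correct prediction, one wrong, one ignored.
preds = [1, 2, 7, 4]
labels = [9, 1, 5, -100]
acc = causal_lm_accuracy(preds, labels)
```

On this toy input the two scored positions split one correct, one wrong, giving an accuracy of 0.5.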
gokulsrinivasagan/gpt_train_6_256
gokulsrinivasagan
2024-07-02T16:09:26Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "dataset:gokuls/wiki_book_corpus_raw_dataset_tiny", "base_model:openai-community/gpt2", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-07-01T13:55:52Z
--- license: mit base_model: openai-community/gpt2 tags: - generated_from_trainer datasets: - gokuls/wiki_book_corpus_raw_dataset_tiny metrics: - accuracy model-index: - name: gpt_train_6_256 results: - task: name: Causal Language Modeling type: text-generation dataset: name: gokuls/wiki_book_corpus_raw_dataset_tiny type: gokuls/wiki_book_corpus_raw_dataset_tiny metrics: - name: Accuracy type: accuracy value: 0.08509852797509099 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt_train_6_256 This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on the gokuls/wiki_book_corpus_raw_dataset_tiny dataset. It achieves the following results on the evaluation set: - Loss: 9.4766 - Accuracy: 0.0851 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 36 - eval_batch_size: 36 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 10.8672 | 0.0001 | 1 | 10.8672 | 0.0045 | | 10.8672 | 0.0001 | 2 | 10.8672 | 0.0045 | | 10.8672 | 0.0002 | 3 | 10.8672 | 0.0045 | | 10.8672 | 0.0002 | 4 | 10.8672 | 0.0045 | | 10.8594 | 0.0003 | 5 | 10.8672 | 0.0045 | | 10.8672 | 0.0003 | 6 | 10.8672 | 0.0045 | | 10.8672 | 0.0004 | 7 | 10.8672 | 0.0045 | | 10.8672 | 0.0004 | 8 | 10.8672 | 0.0045 | | 10.8672 | 0.0005 | 9 | 10.8672 | 0.0045 | | 10.8594 | 0.0005 | 10 | 10.8672 | 0.0045 
| | 10.8672 | 0.0006 | 11 | 10.8672 | 0.0045 | | 10.8672 | 0.0007 | 12 | 10.8672 | 0.0045 | | 10.8594 | 0.0007 | 13 | 10.8672 | 0.0045 | | 10.8672 | 0.0008 | 14 | 10.8672 | 0.0045 | | 10.8594 | 0.0008 | 15 | 10.8672 | 0.0045 | | 10.8594 | 0.0009 | 16 | 10.8672 | 0.0045 | | 10.8672 | 0.0009 | 17 | 10.8672 | 0.0045 | | 10.8672 | 0.0010 | 18 | 10.8359 | 0.0086 | | 10.8359 | 0.0010 | 19 | 10.8047 | 0.0108 | | 10.8047 | 0.0011 | 20 | 10.7734 | 0.0113 | | 10.7891 | 0.0011 | 21 | 10.75 | 0.0115 | | 10.7578 | 0.0012 | 22 | 10.7266 | 0.0119 | | 10.7188 | 0.0013 | 23 | 10.7031 | 0.0129 | | 10.7188 | 0.0013 | 24 | 10.6797 | 0.0147 | | 10.6953 | 0.0014 | 25 | 10.6641 | 0.0179 | | 10.6719 | 0.0014 | 26 | 10.6406 | 0.0231 | | 10.6562 | 0.0015 | 27 | 10.625 | 0.0286 | | 10.6641 | 0.0015 | 28 | 10.6094 | 0.0347 | | 10.6328 | 0.0016 | 29 | 10.5938 | 0.0399 | | 10.6016 | 0.0016 | 30 | 10.5781 | 0.0436 | | 10.6016 | 0.0017 | 31 | 10.5703 | 0.0463 | | 10.5938 | 0.0017 | 32 | 10.5547 | 0.0479 | | 10.5781 | 0.0018 | 33 | 10.5469 | 0.0484 | | 10.5547 | 0.0019 | 34 | 10.5312 | 0.0484 | | 10.5469 | 0.0019 | 35 | 10.5234 | 0.0484 | | 10.5391 | 0.0020 | 36 | 10.5156 | 0.0482 | | 10.5312 | 0.0020 | 37 | 10.5078 | 0.0475 | | 10.5312 | 0.0021 | 38 | 10.4922 | 0.0475 | | 10.4922 | 0.0021 | 39 | 10.4844 | 0.0476 | | 10.5078 | 0.0022 | 40 | 10.4844 | 0.0477 | | 10.4922 | 0.0022 | 41 | 10.4766 | 0.0481 | | 10.4844 | 0.0023 | 42 | 10.4688 | 0.0486 | | 10.4766 | 0.0023 | 43 | 10.4609 | 0.0493 | | 10.4844 | 0.0024 | 44 | 10.4531 | 0.0495 | | 10.4688 | 0.0025 | 45 | 10.4453 | 0.0503 | | 10.4844 | 0.0025 | 46 | 10.4453 | 0.0513 | | 10.4609 | 0.0026 | 47 | 10.4375 | 0.0522 | | 10.4453 | 0.0026 | 48 | 10.4297 | 0.0526 | | 10.4453 | 0.0027 | 49 | 10.4297 | 0.0532 | | 10.4297 | 0.0027 | 50 | 10.4219 | 0.0537 | | 10.4219 | 0.0028 | 51 | 10.4141 | 0.0544 | | 10.4297 | 0.0028 | 52 | 10.4141 | 0.0548 | | 10.4375 | 0.0029 | 53 | 10.4062 | 0.0554 | | 10.4219 | 0.0029 | 54 | 10.4062 | 0.0558 | | 10.4141 | 0.0030 | 
55 | 10.3984 | 0.0565 | | 10.4141 | 0.0031 | 56 | 10.3906 | 0.0574 | | 10.4219 | 0.0031 | 57 | 10.3906 | 0.0583 | | 10.4219 | 0.0032 | 58 | 10.3828 | 0.0591 | | 10.3984 | 0.0032 | 59 | 10.3828 | 0.0598 | | 10.3984 | 0.0033 | 60 | 10.375 | 0.0603 | | 10.3984 | 0.0033 | 61 | 10.375 | 0.0607 | | 10.3906 | 0.0034 | 62 | 10.3672 | 0.0611 | | 10.3672 | 0.0034 | 63 | 10.3672 | 0.0615 | | 10.3906 | 0.0035 | 64 | 10.3594 | 0.0616 | | 10.3828 | 0.0035 | 65 | 10.3594 | 0.0615 | | 10.3594 | 0.0036 | 66 | 10.3516 | 0.0614 | | 10.3516 | 0.0037 | 67 | 10.3438 | 0.0610 | | 10.3516 | 0.0037 | 68 | 10.3438 | 0.0609 | | 10.3438 | 0.0038 | 69 | 10.3359 | 0.0611 | | 10.3594 | 0.0038 | 70 | 10.3359 | 0.0610 | | 10.3594 | 0.0039 | 71 | 10.3281 | 0.0610 | | 10.3203 | 0.0039 | 72 | 10.3281 | 0.0610 | | 10.3516 | 0.0040 | 73 | 10.3203 | 0.0610 | | 10.3203 | 0.0040 | 74 | 10.3125 | 0.0611 | | 10.3281 | 0.0041 | 75 | 10.3125 | 0.0612 | | 10.3438 | 0.0041 | 76 | 10.3047 | 0.0614 | | 10.2969 | 0.0042 | 77 | 10.3047 | 0.0618 | | 10.3281 | 0.0043 | 78 | 10.2969 | 0.0622 | | 10.2891 | 0.0043 | 79 | 10.2969 | 0.0628 | | 10.3047 | 0.0044 | 80 | 10.2891 | 0.0632 | | 10.2969 | 0.0044 | 81 | 10.2812 | 0.0637 | | 10.2891 | 0.0045 | 82 | 10.2812 | 0.0643 | | 10.3125 | 0.0045 | 83 | 10.2734 | 0.0649 | | 10.2891 | 0.0046 | 84 | 10.2734 | 0.0654 | | 10.2812 | 0.0046 | 85 | 10.2656 | 0.0657 | | 10.3047 | 0.0047 | 86 | 10.2656 | 0.0659 | | 10.2969 | 0.0047 | 87 | 10.2578 | 0.0660 | | 10.2578 | 0.0048 | 88 | 10.25 | 0.0661 | | 10.2812 | 0.0048 | 89 | 10.25 | 0.0662 | | 10.2734 | 0.0049 | 90 | 10.2422 | 0.0663 | | 10.2891 | 0.0050 | 91 | 10.2422 | 0.0664 | | 10.2578 | 0.0050 | 92 | 10.2344 | 0.0666 | | 10.2734 | 0.0051 | 93 | 10.2344 | 0.0668 | | 10.2266 | 0.0051 | 94 | 10.2266 | 0.0671 | | 10.2578 | 0.0052 | 95 | 10.2266 | 0.0674 | | 10.25 | 0.0052 | 96 | 10.2188 | 0.0676 | | 10.2266 | 0.0053 | 97 | 10.2188 | 0.0678 | | 10.2266 | 0.0053 | 98 | 10.2109 | 0.0679 | | 10.2344 | 0.0054 | 99 | 10.2109 | 0.0681 | | 
10.2422 | 0.0054 | 100 | 10.2031 | 0.0682 | | 10.2422 | 0.0055 | 101 | 10.2031 | 0.0683 | | 10.2266 | 0.0056 | 102 | 10.1953 | 0.0685 | | 10.2188 | 0.0056 | 103 | 10.1953 | 0.0686 | | 10.2109 | 0.0057 | 104 | 10.1875 | 0.0687 | | 10.1797 | 0.0057 | 105 | 10.1875 | 0.0689 | | 10.1797 | 0.0058 | 106 | 10.1797 | 0.0691 | | 10.1719 | 0.0058 | 107 | 10.1797 | 0.0693 | | 10.1875 | 0.0059 | 108 | 10.1719 | 0.0696 | | 10.1797 | 0.0059 | 109 | 10.1719 | 0.0698 | | 10.1797 | 0.0060 | 110 | 10.1641 | 0.0700 | | 10.1406 | 0.0060 | 111 | 10.1641 | 0.0702 | | 10.1719 | 0.0061 | 112 | 10.1641 | 0.0704 | | 10.1953 | 0.0062 | 113 | 10.1562 | 0.0706 | | 10.1719 | 0.0062 | 114 | 10.1562 | 0.0708 | | 10.1641 | 0.0063 | 115 | 10.1484 | 0.0710 | | 10.1719 | 0.0063 | 116 | 10.1484 | 0.0712 | | 10.1484 | 0.0064 | 117 | 10.1406 | 0.0713 | | 10.1562 | 0.0064 | 118 | 10.1406 | 0.0715 | | 10.1562 | 0.0065 | 119 | 10.1328 | 0.0716 | | 10.1484 | 0.0065 | 120 | 10.1328 | 0.0718 | | 10.1406 | 0.0066 | 121 | 10.125 | 0.0719 | | 10.1328 | 0.0066 | 122 | 10.125 | 0.0721 | | 10.1641 | 0.0067 | 123 | 10.1172 | 0.0722 | | 10.1328 | 0.0068 | 124 | 10.1172 | 0.0723 | | 10.1484 | 0.0068 | 125 | 10.1094 | 0.0725 | | 10.1406 | 0.0069 | 126 | 10.1094 | 0.0726 | | 10.1406 | 0.0069 | 127 | 10.1016 | 0.0728 | | 10.125 | 0.0070 | 128 | 10.1016 | 0.0729 | | 10.1172 | 0.0070 | 129 | 10.0938 | 0.0731 | | 10.1016 | 0.0071 | 130 | 10.0938 | 0.0732 | | 10.1172 | 0.0071 | 131 | 10.0859 | 0.0733 | | 10.1172 | 0.0072 | 132 | 10.0859 | 0.0734 | | 10.1172 | 0.0072 | 133 | 10.0859 | 0.0736 | | 10.0938 | 0.0073 | 134 | 10.0781 | 0.0737 | | 10.1094 | 0.0074 | 135 | 10.0781 | 0.0738 | | 10.1094 | 0.0074 | 136 | 10.0703 | 0.0740 | | 10.0703 | 0.0075 | 137 | 10.0703 | 0.0742 | | 10.0781 | 0.0075 | 138 | 10.0625 | 0.0743 | | 10.0781 | 0.0076 | 139 | 10.0625 | 0.0745 | | 10.0781 | 0.0076 | 140 | 10.0547 | 0.0746 | | 10.0625 | 0.0077 | 141 | 10.0547 | 0.0747 | | 10.0781 | 0.0077 | 142 | 10.0469 | 0.0749 | | 10.0391 | 0.0078 | 143 | 
10.0469 | 0.0750 | | 10.0703 | 0.0078 | 144 | 10.0469 | 0.0751 | | 10.0391 | 0.0079 | 145 | 10.0391 | 0.0753 | | 10.0469 | 0.0080 | 146 | 10.0391 | 0.0754 | | 10.0547 | 0.0080 | 147 | 10.0312 | 0.0755 | | 10.0703 | 0.0081 | 148 | 10.0312 | 0.0756 | | 10.0469 | 0.0081 | 149 | 10.0234 | 0.0757 | | 10.0391 | 0.0082 | 150 | 10.0234 | 0.0759 | | 10.0391 | 0.0082 | 151 | 10.0156 | 0.0760 | | 10.0391 | 0.0083 | 152 | 10.0156 | 0.0761 | | 10.0391 | 0.0083 | 153 | 10.0156 | 0.0762 | | 10.0469 | 0.0084 | 154 | 10.0078 | 0.0763 | | 10.0312 | 0.0084 | 155 | 10.0078 | 0.0765 | | 9.9844 | 0.0085 | 156 | 10.0 | 0.0766 | | 10.0 | 0.0086 | 157 | 10.0 | 0.0767 | | 10.0078 | 0.0086 | 158 | 9.9922 | 0.0768 | | 10.0078 | 0.0087 | 159 | 9.9922 | 0.0769 | | 10.0234 | 0.0087 | 160 | 9.9922 | 0.0770 | | 9.9922 | 0.0088 | 161 | 9.9844 | 0.0771 | | 9.9922 | 0.0088 | 162 | 9.9844 | 0.0772 | | 9.9766 | 0.0089 | 163 | 9.9766 | 0.0773 | | 9.9922 | 0.0089 | 164 | 9.9766 | 0.0773 | | 9.9766 | 0.0090 | 165 | 9.9688 | 0.0774 | | 9.9844 | 0.0090 | 166 | 9.9688 | 0.0775 | | 9.9766 | 0.0091 | 167 | 9.9688 | 0.0776 | | 9.9844 | 0.0092 | 168 | 9.9609 | 0.0777 | | 9.9609 | 0.0092 | 169 | 9.9609 | 0.0778 | | 9.9766 | 0.0093 | 170 | 9.9531 | 0.0778 | | 9.9531 | 0.0093 | 171 | 9.9531 | 0.0779 | | 9.9922 | 0.0094 | 172 | 9.9531 | 0.0780 | | 9.9531 | 0.0094 | 173 | 9.9453 | 0.0781 | | 9.9375 | 0.0095 | 174 | 9.9453 | 0.0781 | | 9.9688 | 0.0095 | 175 | 9.9375 | 0.0782 | | 9.9453 | 0.0096 | 176 | 9.9375 | 0.0783 | | 9.9375 | 0.0096 | 177 | 9.9375 | 0.0783 | | 9.9375 | 0.0097 | 178 | 9.9297 | 0.0784 | | 9.9453 | 0.0098 | 179 | 9.9297 | 0.0785 | | 9.9453 | 0.0098 | 180 | 9.9219 | 0.0786 | | 9.9297 | 0.0099 | 181 | 9.9219 | 0.0787 | | 9.9375 | 0.0099 | 182 | 9.9141 | 0.0787 | | 9.9375 | 0.0100 | 183 | 9.9141 | 0.0788 | | 9.8984 | 0.0100 | 184 | 9.9141 | 0.0789 | | 9.9375 | 0.0101 | 185 | 9.9062 | 0.0790 | | 9.9297 | 0.0101 | 186 | 9.9062 | 0.0791 | | 9.9297 | 0.0102 | 187 | 9.8984 | 0.0791 | | 9.9141 | 0.0102 | 188 
| 9.8984 | 0.0792 | | 9.9219 | 0.0103 | 189 | 9.8984 | 0.0793 | | 9.8984 | 0.0104 | 190 | 9.8906 | 0.0793 | | 9.8828 | 0.0104 | 191 | 9.8906 | 0.0794 | | 9.8984 | 0.0105 | 192 | 9.8828 | 0.0795 | | 9.8906 | 0.0105 | 193 | 9.8828 | 0.0796 | | 9.9062 | 0.0106 | 194 | 9.8828 | 0.0797 | | 9.875 | 0.0106 | 195 | 9.875 | 0.0798 | | 9.8594 | 0.0107 | 196 | 9.875 | 0.0798 | | 9.8828 | 0.0107 | 197 | 9.875 | 0.0799 | | 9.8984 | 0.0108 | 198 | 9.8672 | 0.0800 | | 9.8906 | 0.0108 | 199 | 9.8672 | 0.0801 | | 9.9062 | 0.0109 | 200 | 9.8594 | 0.0801 | | 9.8672 | 0.0110 | 201 | 9.8594 | 0.0802 | | 9.8672 | 0.0110 | 202 | 9.8594 | 0.0803 | | 9.8906 | 0.0111 | 203 | 9.8516 | 0.0804 | | 9.8828 | 0.0111 | 204 | 9.8516 | 0.0804 | | 9.8906 | 0.0112 | 205 | 9.8438 | 0.0805 | | 9.8828 | 0.0112 | 206 | 9.8438 | 0.0805 | | 9.8594 | 0.0113 | 207 | 9.8438 | 0.0806 | | 9.875 | 0.0113 | 208 | 9.8359 | 0.0806 | | 9.8594 | 0.0114 | 209 | 9.8359 | 0.0807 | | 9.8516 | 0.0114 | 210 | 9.8281 | 0.0808 | | 9.8359 | 0.0115 | 211 | 9.8281 | 0.0809 | | 9.8281 | 0.0116 | 212 | 9.8281 | 0.0810 | | 9.8516 | 0.0116 | 213 | 9.8203 | 0.0810 | | 9.8516 | 0.0117 | 214 | 9.8203 | 0.0811 | | 9.8281 | 0.0117 | 215 | 9.8203 | 0.0811 | | 9.8438 | 0.0118 | 216 | 9.8125 | 0.0812 | | 9.8359 | 0.0118 | 217 | 9.8125 | 0.0813 | | 9.8281 | 0.0119 | 218 | 9.8047 | 0.0814 | | 9.8281 | 0.0119 | 219 | 9.8047 | 0.0815 | | 9.8281 | 0.0120 | 220 | 9.8047 | 0.0815 | | 9.7969 | 0.0120 | 221 | 9.7969 | 0.0816 | | 9.8281 | 0.0121 | 222 | 9.7969 | 0.0816 | | 9.8047 | 0.0122 | 223 | 9.7891 | 0.0817 | | 9.8047 | 0.0122 | 224 | 9.7891 | 0.0818 | | 9.8047 | 0.0123 | 225 | 9.7891 | 0.0818 | | 9.8047 | 0.0123 | 226 | 9.7812 | 0.0819 | | 9.8281 | 0.0124 | 227 | 9.7812 | 0.0819 | | 9.7812 | 0.0124 | 228 | 9.7812 | 0.0819 | | 9.7891 | 0.0125 | 229 | 9.7734 | 0.0820 | | 9.7969 | 0.0125 | 230 | 9.7734 | 0.0821 | | 9.7578 | 0.0126 | 231 | 9.7656 | 0.0821 | | 9.8125 | 0.0126 | 232 | 9.7656 | 0.0822 | | 9.7734 | 0.0127 | 233 | 9.7656 | 0.0823 | | 
9.7656 | 0.0128 | 234 | 9.7578 | 0.0823 | | 9.7578 | 0.0128 | 235 | 9.7578 | 0.0824 | | 9.7891 | 0.0129 | 236 | 9.7578 | 0.0824 | | 9.7812 | 0.0129 | 237 | 9.75 | 0.0824 | | 9.7656 | 0.0130 | 238 | 9.75 | 0.0825 | | 9.7969 | 0.0130 | 239 | 9.75 | 0.0825 | | 9.75 | 0.0131 | 240 | 9.7422 | 0.0825 | | 9.7734 | 0.0131 | 241 | 9.7422 | 0.0825 | | 9.7578 | 0.0132 | 242 | 9.7344 | 0.0825 | | 9.7656 | 0.0132 | 243 | 9.7344 | 0.0825 | | 9.7266 | 0.0133 | 244 | 9.7344 | 0.0826 | | 9.75 | 0.0134 | 245 | 9.7266 | 0.0826 | | 9.7422 | 0.0134 | 246 | 9.7266 | 0.0827 | | 9.75 | 0.0135 | 247 | 9.7266 | 0.0827 | | 9.7656 | 0.0135 | 248 | 9.7188 | 0.0828 | | 9.7266 | 0.0136 | 249 | 9.7188 | 0.0828 | | 9.75 | 0.0136 | 250 | 9.7109 | 0.0828 | | 9.7266 | 0.0137 | 251 | 9.7109 | 0.0829 | | 9.7266 | 0.0137 | 252 | 9.7109 | 0.0829 | | 9.7266 | 0.0138 | 253 | 9.7031 | 0.0829 | | 9.7266 | 0.0138 | 254 | 9.7031 | 0.0829 | | 9.7344 | 0.0139 | 255 | 9.7031 | 0.0829 | | 9.7109 | 0.0139 | 256 | 9.6953 | 0.0829 | | 9.7109 | 0.0140 | 257 | 9.6953 | 0.0829 | | 9.7109 | 0.0141 | 258 | 9.6953 | 0.0830 | | 9.7031 | 0.0141 | 259 | 9.6875 | 0.0830 | | 9.7109 | 0.0142 | 260 | 9.6875 | 0.0831 | | 9.6953 | 0.0142 | 261 | 9.6797 | 0.0832 | | 9.7031 | 0.0143 | 262 | 9.6797 | 0.0832 | | 9.6953 | 0.0143 | 263 | 9.6797 | 0.0832 | | 9.6875 | 0.0144 | 264 | 9.6719 | 0.0833 | | 9.6719 | 0.0144 | 265 | 9.6719 | 0.0833 | | 9.6797 | 0.0145 | 266 | 9.6719 | 0.0832 | | 9.7188 | 0.0145 | 267 | 9.6641 | 0.0833 | | 9.6953 | 0.0146 | 268 | 9.6641 | 0.0833 | | 9.6797 | 0.0147 | 269 | 9.6641 | 0.0833 | | 9.6719 | 0.0147 | 270 | 9.6562 | 0.0834 | | 9.6875 | 0.0148 | 271 | 9.6562 | 0.0834 | | 9.6641 | 0.0148 | 272 | 9.6484 | 0.0835 | | 9.6719 | 0.0149 | 273 | 9.6484 | 0.0836 | | 9.6719 | 0.0149 | 274 | 9.6484 | 0.0836 | | 9.6406 | 0.0150 | 275 | 9.6406 | 0.0837 | | 9.6641 | 0.0150 | 276 | 9.6406 | 0.0837 | | 9.6328 | 0.0151 | 277 | 9.6406 | 0.0838 | | 9.6328 | 0.0151 | 278 | 9.6328 | 0.0838 | | 9.6484 | 0.0152 | 279 | 9.6328 | 
0.0838 | | 9.6484 | 0.0153 | 280 | 9.6328 | 0.0838 | | 9.6875 | 0.0153 | 281 | 9.625 | 0.0838 | | 9.6328 | 0.0154 | 282 | 9.625 | 0.0838 | | 9.6562 | 0.0154 | 283 | 9.6172 | 0.0838 | | 9.6719 | 0.0155 | 284 | 9.6172 | 0.0838 | | 9.6641 | 0.0155 | 285 | 9.6172 | 0.0838 | | 9.6328 | 0.0156 | 286 | 9.6094 | 0.0838 | | 9.6328 | 0.0156 | 287 | 9.6094 | 0.0839 | | 9.625 | 0.0157 | 288 | 9.6094 | 0.0839 | | 9.6328 | 0.0157 | 289 | 9.6016 | 0.0840 | | 9.6172 | 0.0158 | 290 | 9.6016 | 0.0840 | | 9.6172 | 0.0159 | 291 | 9.6016 | 0.0841 | | 9.6094 | 0.0159 | 292 | 9.5938 | 0.0841 | | 9.6172 | 0.0160 | 293 | 9.5938 | 0.0842 | | 9.6094 | 0.0160 | 294 | 9.5938 | 0.0842 | | 9.6328 | 0.0161 | 295 | 9.5859 | 0.0842 | | 9.5938 | 0.0161 | 296 | 9.5859 | 0.0842 | | 9.5938 | 0.0162 | 297 | 9.5781 | 0.0842 | | 9.6016 | 0.0162 | 298 | 9.5781 | 0.0842 | | 9.5781 | 0.0163 | 299 | 9.5781 | 0.0842 | | 9.5938 | 0.0163 | 300 | 9.5703 | 0.0843 | | 9.5938 | 0.0164 | 301 | 9.5703 | 0.0843 | | 9.6016 | 0.0165 | 302 | 9.5703 | 0.0844 | | 9.5781 | 0.0165 | 303 | 9.5625 | 0.0845 | | 9.6016 | 0.0166 | 304 | 9.5625 | 0.0845 | | 9.5703 | 0.0166 | 305 | 9.5625 | 0.0845 | | 9.5781 | 0.0167 | 306 | 9.5547 | 0.0845 | | 9.5938 | 0.0167 | 307 | 9.5547 | 0.0846 | | 9.5391 | 0.0168 | 308 | 9.5547 | 0.0846 | | 9.5625 | 0.0168 | 309 | 9.5469 | 0.0846 | | 9.5547 | 0.0169 | 310 | 9.5469 | 0.0846 | | 9.5703 | 0.0169 | 311 | 9.5469 | 0.0846 | | 9.5625 | 0.0170 | 312 | 9.5391 | 0.0846 | | 9.5469 | 0.0171 | 313 | 9.5391 | 0.0846 | | 9.5469 | 0.0171 | 314 | 9.5391 | 0.0846 | | 9.5391 | 0.0172 | 315 | 9.5312 | 0.0847 | | 9.5781 | 0.0172 | 316 | 9.5312 | 0.0847 | | 9.5469 | 0.0173 | 317 | 9.5312 | 0.0847 | | 9.5312 | 0.0173 | 318 | 9.5234 | 0.0848 | | 9.5703 | 0.0174 | 319 | 9.5234 | 0.0848 | | 9.5312 | 0.0174 | 320 | 9.5234 | 0.0848 | | 9.5703 | 0.0175 | 321 | 9.5156 | 0.0848 | | 9.5312 | 0.0175 | 322 | 9.5156 | 0.0849 | | 9.5391 | 0.0176 | 323 | 9.5078 | 0.0849 | | 9.5156 | 0.0177 | 324 | 9.5078 | 0.0849 | | 9.5234 | 
0.0177 | 325 | 9.5078 | 0.0849 | | 9.5391 | 0.0178 | 326 | 9.5 | 0.0849 | | 9.5078 | 0.0178 | 327 | 9.5 | 0.0849 | | 9.5312 | 0.0179 | 328 | 9.5 | 0.0848 | | 9.5078 | 0.0179 | 329 | 9.4922 | 0.0848 | | 9.5234 | 0.0180 | 330 | 9.4922 | 0.0847 | | 9.5078 | 0.0180 | 331 | 9.4922 | 0.0848 | | 9.4922 | 0.0181 | 332 | 9.4844 | 0.0848 | | 9.5 | 0.0181 | 333 | 9.4844 | 0.0849 | | 9.5078 | 0.0182 | 334 | 9.4844 | 0.0850 | | 9.4766 | 0.0183 | 335 | 9.4766 | 0.0851 | | 9.5 | 0.0183 | 336 | 9.4766 | 0.0851 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.0a0+32f93b1 - Datasets 2.20.0 - Tokenizers 0.19.1
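The hyperparameters above name a `linear` learning-rate scheduler. Its shape is a linear ramp over any warmup steps followed by a linear decay to zero at the final step; the following sketch mirrors that shape, with the warmup and total step counts chosen for illustration (they are not values taken from this training run):

```python
def linear_lr(step, base_lr=1e-5, warmup_steps=0, total_steps=1000):
    """Linear learning-rate schedule: ramp up, then decay to zero.

    Illustrative sketch of the `linear` scheduler shape named in the
    hyperparameters; warmup_steps and total_steps here are assumptions,
    not values from the run documented above.
    """
    if warmup_steps > 0 and step < warmup_steps:
        return base_lr * step / warmup_steps          # linear warmup
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

lr_start = linear_lr(0)     # full base learning rate (no warmup here)
lr_mid = linear_lr(500)     # halfway through: half the base rate
lr_end = linear_lr(1000)    # decayed to zero at the final step
```

With `base_lr=1e-5` and no warmup, the rate starts at 1e-5, falls to 5e-6 at step 500, and reaches 0 at step 1000.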
gokulsrinivasagan/gpt_train_12_256
gokulsrinivasagan
2024-07-02T16:11:14Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "dataset:gokuls/wiki_book_corpus_raw_dataset_tiny", "base_model:openai-community/gpt2", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-07-01T13:56:23Z
--- license: mit base_model: openai-community/gpt2 tags: - generated_from_trainer datasets: - gokuls/wiki_book_corpus_raw_dataset_tiny metrics: - accuracy model-index: - name: gpt_train_12_256 results: - task: name: Causal Language Modeling type: text-generation dataset: name: gokuls/wiki_book_corpus_raw_dataset_tiny type: gokuls/wiki_book_corpus_raw_dataset_tiny metrics: - name: Accuracy type: accuracy value: 0.08778952977191952 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt_train_12_256 This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on the gokuls/wiki_book_corpus_raw_dataset_tiny dataset. It achieves the following results on the evaluation set: - Loss: 9.6016 - Accuracy: 0.0878 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 36 - eval_batch_size: 36 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 10.875 | 0.0001 | 1 | 10.875 | 0.0031 | | 10.875 | 0.0001 | 2 | 10.875 | 0.0031 | | 10.875 | 0.0002 | 3 | 10.875 | 0.0031 | | 10.875 | 0.0002 | 4 | 10.875 | 0.0031 | | 10.8672 | 0.0003 | 5 | 10.875 | 0.0031 | | 10.875 | 0.0003 | 6 | 10.875 | 0.0031 | | 10.8672 | 0.0004 | 7 | 10.875 | 0.0031 | | 10.875 | 0.0004 | 8 | 10.875 | 0.0031 | | 10.875 | 0.0005 | 9 | 10.875 | 0.0031 | | 10.875 | 0.0005 | 10 | 10.875 | 0.0031 | | 10.875 | 
0.0006 | 11 | 10.875 | 0.0031 | | 10.875 | 0.0007 | 12 | 10.875 | 0.0031 | | 10.875 | 0.0007 | 13 | 10.875 | 0.0031 | | 10.875 | 0.0008 | 14 | 10.875 | 0.0031 | | 10.875 | 0.0008 | 15 | 10.875 | 0.0031 | | 10.875 | 0.0009 | 16 | 10.875 | 0.0031 | | 10.8672 | 0.0009 | 17 | 10.875 | 0.0031 | | 10.875 | 0.0010 | 18 | 10.8047 | 0.0103 | | 10.8125 | 0.0010 | 19 | 10.75 | 0.0119 | | 10.7578 | 0.0011 | 20 | 10.6953 | 0.0180 | | 10.7188 | 0.0011 | 21 | 10.6562 | 0.0319 | | 10.6719 | 0.0012 | 22 | 10.625 | 0.0470 | | 10.6328 | 0.0013 | 23 | 10.5938 | 0.0530 | | 10.6172 | 0.0013 | 24 | 10.5703 | 0.0542 | | 10.5859 | 0.0014 | 25 | 10.5469 | 0.0543 | | 10.5547 | 0.0014 | 26 | 10.5312 | 0.0540 | | 10.5391 | 0.0015 | 27 | 10.5156 | 0.0534 | | 10.5547 | 0.0015 | 28 | 10.5 | 0.0531 | | 10.5156 | 0.0016 | 29 | 10.4844 | 0.0535 | | 10.4844 | 0.0016 | 30 | 10.4766 | 0.0542 | | 10.4844 | 0.0017 | 31 | 10.4609 | 0.0548 | | 10.4766 | 0.0017 | 32 | 10.4531 | 0.0551 | | 10.4766 | 0.0018 | 33 | 10.4453 | 0.0557 | | 10.4531 | 0.0019 | 34 | 10.4375 | 0.0565 | | 10.4453 | 0.0019 | 35 | 10.4297 | 0.0570 | | 10.4375 | 0.0020 | 36 | 10.4219 | 0.0575 | | 10.4375 | 0.0020 | 37 | 10.4141 | 0.0581 | | 10.4453 | 0.0021 | 38 | 10.4141 | 0.0583 | | 10.3984 | 0.0021 | 39 | 10.4062 | 0.0585 | | 10.4141 | 0.0022 | 40 | 10.3984 | 0.0586 | | 10.4062 | 0.0022 | 41 | 10.3906 | 0.0587 | | 10.3984 | 0.0023 | 42 | 10.3906 | 0.0587 | | 10.3906 | 0.0023 | 43 | 10.3828 | 0.0588 | | 10.4062 | 0.0024 | 44 | 10.375 | 0.0591 | | 10.375 | 0.0025 | 45 | 10.375 | 0.0592 | | 10.3984 | 0.0025 | 46 | 10.3672 | 0.0592 | | 10.3828 | 0.0026 | 47 | 10.3594 | 0.0593 | | 10.375 | 0.0026 | 48 | 10.3516 | 0.0597 | | 10.3594 | 0.0027 | 49 | 10.3516 | 0.0599 | | 10.3516 | 0.0027 | 50 | 10.3438 | 0.0602 | | 10.3438 | 0.0028 | 51 | 10.3359 | 0.0604 | | 10.3516 | 0.0028 | 52 | 10.3281 | 0.0606 | | 10.3594 | 0.0029 | 53 | 10.3281 | 0.0607 | | 10.3438 | 0.0029 | 54 | 10.3203 | 0.0608 | | 10.3281 | 0.0030 | 55 | 10.3125 | 0.0608 | | 10.3281 
| 0.0031 | 56 | 10.3125 | 0.0607 | | 10.3281 | 0.0031 | 57 | 10.3047 | 0.0607 | | 10.3438 | 0.0032 | 58 | 10.3047 | 0.0607 | | 10.3125 | 0.0032 | 59 | 10.2969 | 0.0609 | | 10.3203 | 0.0033 | 60 | 10.2969 | 0.0612 | | 10.3125 | 0.0033 | 61 | 10.2891 | 0.0615 | | 10.2969 | 0.0034 | 62 | 10.2812 | 0.0618 | | 10.2891 | 0.0034 | 63 | 10.2812 | 0.0620 | | 10.2969 | 0.0035 | 64 | 10.2734 | 0.0622 | | 10.2891 | 0.0035 | 65 | 10.2734 | 0.0622 | | 10.2734 | 0.0036 | 66 | 10.2656 | 0.0623 | | 10.2656 | 0.0037 | 67 | 10.2656 | 0.0623 | | 10.2656 | 0.0037 | 68 | 10.2578 | 0.0623 | | 10.2578 | 0.0038 | 69 | 10.25 | 0.0622 | | 10.25 | 0.0038 | 70 | 10.25 | 0.0622 | | 10.2656 | 0.0039 | 71 | 10.2422 | 0.0623 | | 10.2344 | 0.0039 | 72 | 10.2422 | 0.0626 | | 10.2578 | 0.0040 | 73 | 10.2344 | 0.0629 | | 10.2266 | 0.0040 | 74 | 10.2344 | 0.0632 | | 10.2422 | 0.0041 | 75 | 10.2266 | 0.0633 | | 10.2656 | 0.0041 | 76 | 10.2266 | 0.0633 | | 10.2266 | 0.0042 | 77 | 10.2188 | 0.0632 | | 10.2422 | 0.0043 | 78 | 10.2188 | 0.0631 | | 10.2031 | 0.0043 | 79 | 10.2109 | 0.0630 | | 10.2031 | 0.0044 | 80 | 10.2109 | 0.0631 | | 10.2188 | 0.0044 | 81 | 10.2031 | 0.0633 | | 10.2188 | 0.0045 | 82 | 10.2031 | 0.0637 | | 10.2344 | 0.0045 | 83 | 10.1953 | 0.0641 | | 10.2188 | 0.0046 | 84 | 10.1953 | 0.0647 | | 10.2031 | 0.0046 | 85 | 10.1875 | 0.0653 | | 10.2266 | 0.0047 | 86 | 10.1875 | 0.0657 | | 10.2109 | 0.0047 | 87 | 10.1797 | 0.0660 | | 10.1641 | 0.0048 | 88 | 10.1797 | 0.0660 | | 10.1953 | 0.0048 | 89 | 10.1719 | 0.0660 | | 10.1875 | 0.0049 | 90 | 10.1719 | 0.0658 | | 10.2031 | 0.0050 | 91 | 10.1641 | 0.0658 | | 10.1719 | 0.0050 | 92 | 10.1641 | 0.0658 | | 10.1953 | 0.0051 | 93 | 10.1562 | 0.0660 | | 10.1641 | 0.0051 | 94 | 10.1562 | 0.0665 | | 10.1797 | 0.0052 | 95 | 10.1484 | 0.0673 | | 10.1797 | 0.0052 | 96 | 10.1484 | 0.0682 | | 10.1406 | 0.0053 | 97 | 10.1406 | 0.0690 | | 10.1562 | 0.0053 | 98 | 10.1406 | 0.0696 | | 10.1406 | 0.0054 | 99 | 10.1328 | 0.0699 | | 10.1641 | 0.0054 | 100 | 10.1328 
| 0.0700 | | 10.1797 | 0.0055 | 101 | 10.125 | 0.0699 | | 10.1484 | 0.0056 | 102 | 10.125 | 0.0699 | | 10.1406 | 0.0056 | 103 | 10.1172 | 0.0701 | | 10.1328 | 0.0057 | 104 | 10.1172 | 0.0706 | | 10.0938 | 0.0057 | 105 | 10.1094 | 0.0712 | | 10.1016 | 0.0058 | 106 | 10.1094 | 0.0719 | | 10.1016 | 0.0058 | 107 | 10.1016 | 0.0725 | | 10.1094 | 0.0059 | 108 | 10.1016 | 0.0728 | | 10.1016 | 0.0059 | 109 | 10.1016 | 0.0729 | | 10.1016 | 0.0060 | 110 | 10.0938 | 0.0729 | | 10.0781 | 0.0060 | 111 | 10.0938 | 0.0728 | | 10.0938 | 0.0061 | 112 | 10.0859 | 0.0727 | | 10.1172 | 0.0062 | 113 | 10.0859 | 0.0725 | | 10.1016 | 0.0062 | 114 | 10.0781 | 0.0725 | | 10.0938 | 0.0063 | 115 | 10.0781 | 0.0726 | | 10.1016 | 0.0063 | 116 | 10.0703 | 0.0730 | | 10.0703 | 0.0064 | 117 | 10.0703 | 0.0733 | | 10.0938 | 0.0064 | 118 | 10.0625 | 0.0738 | | 10.0859 | 0.0065 | 119 | 10.0625 | 0.0742 | | 10.0781 | 0.0065 | 120 | 10.0625 | 0.0744 | | 10.0625 | 0.0066 | 121 | 10.0547 | 0.0745 | | 10.0547 | 0.0066 | 122 | 10.0547 | 0.0746 | | 10.0781 | 0.0067 | 123 | 10.0469 | 0.0746 | | 10.0625 | 0.0068 | 124 | 10.0469 | 0.0745 | | 10.0781 | 0.0068 | 125 | 10.0391 | 0.0745 | | 10.0781 | 0.0069 | 126 | 10.0391 | 0.0747 | | 10.0703 | 0.0069 | 127 | 10.0391 | 0.0752 | | 10.0547 | 0.0070 | 128 | 10.0312 | 0.0758 | | 10.0469 | 0.0070 | 129 | 10.0312 | 0.0762 | | 10.0391 | 0.0071 | 130 | 10.0234 | 0.0765 | | 10.0391 | 0.0071 | 131 | 10.0234 | 0.0765 | | 10.0469 | 0.0072 | 132 | 10.0156 | 0.0764 | | 10.0469 | 0.0072 | 133 | 10.0156 | 0.0761 | | 10.0234 | 0.0073 | 134 | 10.0156 | 0.0759 | | 10.0312 | 0.0074 | 135 | 10.0078 | 0.0757 | | 10.0312 | 0.0074 | 136 | 10.0078 | 0.0757 | | 10.0078 | 0.0075 | 137 | 10.0 | 0.0759 | | 10.0 | 0.0075 | 138 | 10.0 | 0.0763 | | 10.0078 | 0.0076 | 139 | 10.0 | 0.0768 | | 10.0234 | 0.0076 | 140 | 9.9922 | 0.0774 | | 9.9922 | 0.0077 | 141 | 9.9922 | 0.0779 | | 10.0234 | 0.0077 | 142 | 9.9844 | 0.0782 | | 9.9766 | 0.0078 | 143 | 9.9844 | 0.0783 | | 10.0156 | 0.0078 | 144 | 
9.9844 | 0.0782 | | 9.9844 | 0.0079 | 145 | 9.9766 | 0.0780 | | 9.9922 | 0.0080 | 146 | 9.9766 | 0.0778 | | 9.9844 | 0.0080 | 147 | 9.9688 | 0.0776 | | 10.0 | 0.0081 | 148 | 9.9688 | 0.0775 | | 9.9766 | 0.0081 | 149 | 9.9688 | 0.0776 | | 9.9688 | 0.0082 | 150 | 9.9609 | 0.0778 | | 9.9844 | 0.0082 | 151 | 9.9609 | 0.0782 | | 9.9766 | 0.0083 | 152 | 9.9531 | 0.0785 | | 9.9766 | 0.0083 | 153 | 9.9531 | 0.0787 | | 9.9922 | 0.0084 | 154 | 9.9453 | 0.0787 | | 9.9688 | 0.0084 | 155 | 9.9453 | 0.0787 | | 9.9141 | 0.0085 | 156 | 9.9453 | 0.0785 | | 9.9453 | 0.0086 | 157 | 9.9375 | 0.0783 | | 9.9375 | 0.0086 | 158 | 9.9375 | 0.0782 | | 9.9453 | 0.0087 | 159 | 9.9375 | 0.0782 | | 9.9531 | 0.0087 | 160 | 9.9297 | 0.0784 | | 9.9297 | 0.0088 | 161 | 9.9297 | 0.0788 | | 9.9375 | 0.0088 | 162 | 9.9219 | 0.0793 | | 9.9219 | 0.0089 | 163 | 9.9219 | 0.0797 | | 9.9297 | 0.0089 | 164 | 9.9219 | 0.0799 | | 9.9219 | 0.0090 | 165 | 9.9141 | 0.0802 | | 9.9141 | 0.0090 | 166 | 9.9141 | 0.0801 | | 9.9141 | 0.0091 | 167 | 9.9062 | 0.0799 | | 9.9219 | 0.0092 | 168 | 9.9062 | 0.0797 | | 9.9062 | 0.0092 | 169 | 9.9062 | 0.0795 | | 9.9062 | 0.0093 | 170 | 9.8984 | 0.0795 | | 9.9062 | 0.0093 | 171 | 9.8984 | 0.0797 | | 9.9297 | 0.0094 | 172 | 9.8906 | 0.0800 | | 9.8984 | 0.0094 | 173 | 9.8906 | 0.0804 | | 9.875 | 0.0095 | 174 | 9.8906 | 0.0808 | | 9.8984 | 0.0095 | 175 | 9.8828 | 0.0810 | | 9.8828 | 0.0096 | 176 | 9.8828 | 0.0811 | | 9.8828 | 0.0096 | 177 | 9.8828 | 0.0811 | | 9.875 | 0.0097 | 178 | 9.875 | 0.0808 | | 9.8828 | 0.0098 | 179 | 9.875 | 0.0805 | | 9.8906 | 0.0098 | 180 | 9.8672 | 0.0803 | | 9.8594 | 0.0099 | 181 | 9.8672 | 0.0803 | | 9.8828 | 0.0099 | 182 | 9.8672 | 0.0804 | | 9.8906 | 0.0100 | 183 | 9.8594 | 0.0807 | | 9.8438 | 0.0100 | 184 | 9.8594 | 0.0809 | | 9.8672 | 0.0101 | 185 | 9.8516 | 0.0810 | | 9.8828 | 0.0101 | 186 | 9.8516 | 0.0811 | | 9.8828 | 0.0102 | 187 | 9.8516 | 0.0811 | | 9.8594 | 0.0102 | 188 | 9.8438 | 0.0811 | | 9.8672 | 0.0103 | 189 | 9.8438 | 0.0811 | | 
9.8516 | 0.0104 | 190 | 9.8438 | 0.0812 | | 9.8281 | 0.0104 | 191 | 9.8359 | 0.0813 | | 9.8359 | 0.0105 | 192 | 9.8359 | 0.0816 | | 9.8359 | 0.0105 | 193 | 9.8281 | 0.0818 | | 9.8516 | 0.0106 | 194 | 9.8281 | 0.0819 | | 9.8125 | 0.0106 | 195 | 9.8281 | 0.0817 | | 9.8047 | 0.0107 | 196 | 9.8203 | 0.0815 | | 9.8203 | 0.0107 | 197 | 9.8203 | 0.0814 | | 9.8438 | 0.0108 | 198 | 9.8203 | 0.0814 | | 9.8281 | 0.0108 | 199 | 9.8125 | 0.0815 | | 9.8516 | 0.0109 | 200 | 9.8125 | 0.0819 | | 9.8125 | 0.0110 | 201 | 9.8047 | 0.0823 | | 9.7969 | 0.0110 | 202 | 9.8047 | 0.0826 | | 9.8359 | 0.0111 | 203 | 9.8047 | 0.0827 | | 9.8359 | 0.0111 | 204 | 9.7969 | 0.0828 | | 9.8281 | 0.0112 | 205 | 9.7969 | 0.0826 | | 9.8359 | 0.0112 | 206 | 9.7969 | 0.0824 | | 9.8125 | 0.0113 | 207 | 9.7891 | 0.0823 | | 9.8281 | 0.0113 | 208 | 9.7891 | 0.0824 | | 9.8203 | 0.0114 | 209 | 9.7812 | 0.0826 | | 9.7891 | 0.0114 | 210 | 9.7812 | 0.0826 | | 9.7734 | 0.0115 | 211 | 9.7812 | 0.0826 | | 9.7734 | 0.0116 | 212 | 9.7734 | 0.0830 | | 9.7969 | 0.0116 | 213 | 9.7734 | 0.0835 | | 9.7969 | 0.0117 | 214 | 9.7656 | 0.0840 | | 9.7656 | 0.0117 | 215 | 9.7656 | 0.0844 | | 9.7891 | 0.0118 | 216 | 9.7656 | 0.0844 | | 9.7812 | 0.0118 | 217 | 9.7578 | 0.0845 | | 9.7812 | 0.0119 | 218 | 9.7578 | 0.0844 | | 9.7891 | 0.0119 | 219 | 9.7578 | 0.0844 | | 9.7734 | 0.0120 | 220 | 9.75 | 0.0844 | | 9.75 | 0.0120 | 221 | 9.75 | 0.0844 | | 9.7578 | 0.0121 | 222 | 9.7422 | 0.0843 | | 9.7422 | 0.0122 | 223 | 9.7422 | 0.0842 | | 9.7578 | 0.0122 | 224 | 9.7422 | 0.0843 | | 9.7344 | 0.0123 | 225 | 9.7344 | 0.0845 | | 9.7578 | 0.0123 | 226 | 9.7344 | 0.0848 | | 9.7734 | 0.0124 | 227 | 9.7344 | 0.0851 | | 9.7266 | 0.0124 | 228 | 9.7266 | 0.0851 | | 9.7344 | 0.0125 | 229 | 9.7266 | 0.0849 | | 9.7344 | 0.0125 | 230 | 9.7266 | 0.0849 | | 9.6875 | 0.0126 | 231 | 9.7188 | 0.0850 | | 9.75 | 0.0126 | 232 | 9.7188 | 0.0854 | | 9.7188 | 0.0127 | 233 | 9.7109 | 0.0857 | | 9.7109 | 0.0128 | 234 | 9.7109 | 0.0860 | | 9.7031 | 0.0128 | 235 | 
9.7109 | 0.0861 | | 9.7422 | 0.0129 | 236 | 9.7031 | 0.0861 | | 9.7266 | 0.0129 | 237 | 9.7031 | 0.0861 | | 9.7109 | 0.0130 | 238 | 9.7031 | 0.0858 | | 9.7422 | 0.0130 | 239 | 9.6953 | 0.0856 | | 9.6875 | 0.0131 | 240 | 9.6953 | 0.0854 | | 9.7109 | 0.0131 | 241 | 9.6953 | 0.0853 | | 9.6953 | 0.0132 | 242 | 9.6875 | 0.0853 | | 9.7109 | 0.0132 | 243 | 9.6875 | 0.0856 | | 9.6719 | 0.0133 | 244 | 9.6797 | 0.0859 | | 9.7109 | 0.0134 | 245 | 9.6797 | 0.0863 | | 9.6719 | 0.0134 | 246 | 9.6797 | 0.0866 | | 9.7109 | 0.0135 | 247 | 9.6719 | 0.0867 | | 9.7031 | 0.0135 | 248 | 9.6719 | 0.0866 | | 9.6641 | 0.0136 | 249 | 9.6719 | 0.0866 | | 9.6953 | 0.0136 | 250 | 9.6641 | 0.0866 | | 9.6641 | 0.0137 | 251 | 9.6641 | 0.0866 | | 9.6719 | 0.0137 | 252 | 9.6641 | 0.0868 | | 9.6719 | 0.0138 | 253 | 9.6562 | 0.0869 | | 9.6797 | 0.0138 | 254 | 9.6562 | 0.0870 | | 9.6797 | 0.0139 | 255 | 9.6484 | 0.0870 | | 9.6641 | 0.0139 | 256 | 9.6484 | 0.0870 | | 9.6562 | 0.0140 | 257 | 9.6484 | 0.0869 | | 9.6562 | 0.0141 | 258 | 9.6406 | 0.0867 | | 9.6562 | 0.0141 | 259 | 9.6406 | 0.0865 | | 9.6641 | 0.0142 | 260 | 9.6406 | 0.0866 | | 9.6406 | 0.0142 | 261 | 9.6328 | 0.0868 | | 9.6484 | 0.0143 | 262 | 9.6328 | 0.0871 | | 9.6484 | 0.0143 | 263 | 9.6328 | 0.0873 | | 9.6328 | 0.0144 | 264 | 9.625 | 0.0874 | | 9.625 | 0.0144 | 265 | 9.625 | 0.0875 | | 9.6328 | 0.0145 | 266 | 9.6172 | 0.0877 | | 9.6641 | 0.0145 | 267 | 9.6172 | 0.0877 | | 9.6484 | 0.0146 | 268 | 9.6172 | 0.0877 | | 9.6328 | 0.0147 | 269 | 9.6094 | 0.0877 | | 9.625 | 0.0147 | 270 | 9.6094 | 0.0875 | | 9.625 | 0.0148 | 271 | 9.6094 | 0.0875 | | 9.6094 | 0.0148 | 272 | 9.6016 | 0.0875 | | 9.6172 | 0.0149 | 273 | 9.6016 | 0.0877 | | 9.625 | 0.0149 | 274 | 9.6016 | 0.0878 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.0a0+32f93b1 - Datasets 2.20.0 - Tokenizers 0.19.1
murtuzaakhtari/results
murtuzaakhtari
2024-07-01T13:57:19Z
0
0
null
[ "region:us" ]
null
2024-07-01T13:57:19Z
Entry not found
DavidLanz/Llama3_tw_8B_btc_qlora
DavidLanz
2024-07-01T13:58:48Z
0
2
peft
[ "peft", "safetensors", "facebook", "meta", "pytorch", "llama", "llama-2", "text-generation", "en", "base_model:DavidLanz/Llama3-tw-8B-Instruct", "license:apache-2.0", "region:us" ]
text-generation
2024-07-01T13:57:31Z
--- language: - en license: apache-2.0 library_name: peft tags: - facebook - meta - pytorch - llama - llama-2 base_model: DavidLanz/Llama3-tw-8B-Instruct model_name: Llama 3 8B Instruct inference: false model_creator: Meta Llama 3 model_type: llama pipeline_tag: text-generation quantized_by: QLoRA --- # Model Card for Model ID This PEFT adapter is fine-tuned to predict the BTC price. Disclaimer: this model is an experiment in applying an LLM to a time-series problem; it is not investment advice, and its predictions are not a basis for investment decisions. ## Model Details Training data source: BTC/USD provided by [Binance](https://www.binance.com/). ### Model Description This repo contains QLoRA adapter files for [Meta's Llama 3 8B tw Instruct](https://huggingface.co/DavidLanz/Llama3-tw-8B-Instruct). ## Uses ```python import torch from peft import PeftModel from transformers import ( AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline, ) device_map = {"": 0} use_4bit = True bnb_4bit_compute_dtype = "float16" bnb_4bit_quant_type = "nf4" use_nested_quant = False compute_dtype = getattr(torch, bnb_4bit_compute_dtype) bnb_config = BitsAndBytesConfig( load_in_4bit=use_4bit, bnb_4bit_quant_type=bnb_4bit_quant_type, bnb_4bit_compute_dtype=compute_dtype, bnb_4bit_use_double_quant=use_nested_quant, ) based_model_path = "DavidLanz/Llama3-tw-8B-Instruct" adapter_path = "DavidLanz/Llama3_tw_8B_btc_qlora" base_model = AutoModelForCausalLM.from_pretrained( based_model_path, low_cpu_mem_usage=True, return_dict=True, quantization_config=bnb_config, torch_dtype=torch.float16, device_map=device_map, ) model = PeftModel.from_pretrained(base_model, adapter_path) tokenizer = AutoTokenizer.from_pretrained(based_model_path, trust_remote_code=True) text_gen_pipeline = pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": 
torch.bfloat16}, tokenizer=tokenizer, ) messages = [ { "role": "system", "content": "δ½ ζ˜―δΈ€δ½ε°ˆζ₯­ηš„BTC虛擬貨幣分析師", }, {"role": "user", "content": "昨ζ—₯開盤價為64437.18,最高價為64960.37,最低價為62953.90,收盤價為64808.35,亀易量為808273.27γ€‚θ«‹ι ζΈ¬δ»Šζ—₯BTC的收盤價?"}, ] prompt = text_gen_pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ text_gen_pipeline.tokenizer.eos_token_id, text_gen_pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = text_gen_pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` ### Framework versions - PEFT 0.11.1
CennetOguz/cooking_blip2_2
CennetOguz
2024-07-01T18:13:58Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-07-01T13:58:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
itay-nakash/model_387dff9370_sweep_skilled-waterfall-1163
itay-nakash
2024-07-01T13:58:36Z
0
0
null
[ "region:us" ]
null
2024-07-01T13:58:36Z
Entry not found
ikedachin/codeparrot-small
ikedachin
2024-07-01T14:00:17Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-07-01T13:59:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
pbisht/budtenderai
pbisht
2024-07-01T14:25:21Z
0
0
transformers
[ "transformers", "safetensors", "autotrain", "text-generation-inference", "text-generation", "peft", "conversational", "dataset:pbisht/train_ha.csv", "base_model:UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2024-07-01T14:00:55Z
--- tags: - autotrain - text-generation-inference - text-generation - peft library_name: transformers base_model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3 widget: - messages: - role: user content: What is your favorite condiment? license: other datasets: - pbisht/train_ha.csv --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
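The `apply_chat_template` call above delegates prompt formatting to the tokenizer. For intuition, here is a rough sketch of the string a Llama-3-style template produces — the token layout is an assumption based on the Llama 3 instruct format, and the template bundled with the tokenizer is authoritative:

```python
def llama3_chat_prompt(messages, add_generation_prompt=True):
    # Token layout assumed from the Llama 3 instruct format; the tokenizer's
    # own chat template is authoritative.
    out = "<|begin_of_text|>"
    for m in messages:
        out += f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
    if add_generation_prompt:
        # Trailing assistant header cues the model to generate its reply.
        out += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return out

prompt = llama3_chat_prompt([{"role": "user", "content": "hi"}])
```

This is why the usage snippet slices the decoded output at `input_ids.shape[1]`: the generated ids begin right after the assistant header appended here.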
aaalby/Isa
aaalby
2024-07-01T14:01:43Z
0
0
null
[ "license:openrail", "region:us" ]
null
2024-07-01T14:01:02Z
--- license: openrail ---
chjoo7/kicon_mixtral87_qlora_v3
chjoo7
2024-07-01T14:03:02Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-07-01T14:01:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
summertime0/nashk9
summertime0
2024-07-01T16:00:55Z
0
0
null
[ "region:us" ]
null
2024-07-01T14:01:19Z
Entry not found
fiveflow/orpo-gemma
fiveflow
2024-07-01T14:01:39Z
0
0
null
[ "region:us" ]
null
2024-07-01T14:01:39Z
Entry not found
ClementineBleuze/roberta_prefix_cont_ll_SEP
ClementineBleuze
2024-07-02T07:12:54Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-07-01T14:02:09Z
--- license: mit base_model: FacebookAI/roberta-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta_prefix_cont_ll_SEP results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta_prefix_cont_ll_SEP This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0923 - F1 Weighted: 0.8912 - F1 Samples: 0.8996 - F1 Macro: 0.7665 - F1 Micro: 0.8944 - Accuracy: 0.8708 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Accuracy | F1 Macro | F1 Micro | F1 Samples | F1 Weighted | Validation Loss | |:-------------:|:------:|:----:|:--------:|:--------:|:--------:|:----------:|:-----------:|:---------------:| | 0.2635 | 0.3381 | 500 | 0.7165 | 0.3985 | 0.7662 | 0.7362 | 0.7323 | 0.1760 | | 0.164 | 0.6761 | 1000 | 0.7855 | 0.6106 | 0.8291 | 0.8080 | 0.8107 | 0.1356 | | 0.1401 | 1.0142 | 1500 | 0.8024 | 0.6610 | 0.8398 | 0.8270 | 0.8268 | 0.1214 | | 0.1225 | 1.3523 | 2000 | 0.7916 | 0.6825 | 0.8334 | 0.8186 | 0.8256 | 0.1242 | | 0.1116 | 1.6903 | 2500 | 0.8227 | 0.7166 | 0.8575 | 0.8541 | 0.8531 | 0.1112 | | 0.1058 | 2.0284 | 3000 | 0.8180 | 0.7133 | 0.8528 | 0.8501 | 0.8489 | 0.1147 | | 0.0828 | 2.3665 | 3500 | 0.8315 | 0.7210 | 0.8650 | 0.8601 | 0.8594 | 0.1070 | | 0.0857 | 2.7045 | 4000 | 0.8403 | 0.7118 | 0.8683 | 
0.8672 | 0.8611 | 0.1052 | | 0.0802 | 3.0426 | 4500 | 0.8566 | 0.7411 | 0.8849 | 0.8851 | 0.8785 | 0.0954 | | 0.0636 | 3.3807 | 5000 | 0.8613 | 0.7236 | 0.8850 | 0.8868 | 0.8775 | 0.0955 | | 0.0629 | 3.7187 | 5500 | 0.8586 | 0.7424 | 0.8881 | 0.8911 | 0.8830 | 0.0982 | | 0.0606 | 4.0568 | 6000 | 0.8620 | 0.7428 | 0.8873 | 0.8894 | 0.8805 | 0.0990 | | 0.0466 | 4.3949 | 6500 | 0.8708 | 0.7665 | 0.8944 | 0.8996 | 0.8912 | 0.0923 | | 0.0465 | 4.7329 | 7000 | 0.8694 | 0.7602 | 0.8914 | 0.8969 | 0.8876 | 0.0966 | | 0.0459 | 5.0710 | 7500 | 0.8701 | 0.7535 | 0.8951 | 0.8965 | 0.8896 | 0.0973 | | 0.0374 | 5.4091 | 8000 | 0.8647 | 0.7876 | 0.8877 | 0.8923 | 0.8868 | 0.0992 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
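The results above report four F1 averaging variants (macro, micro, weighted, samples). As a reminder of how they differ for multi-label classification, a minimal pure-Python sketch on a toy example — an illustration, not the evaluation code used for this model:

```python
# Toy multi-label example: 3 samples, 3 labels, targets/predictions as label sets.
y_true = [{0, 1}, {1}, {2}]
y_pred = [{0}, {1, 2}, {2}]

def _f1(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def multilabel_f1_scores(y_true, y_pred, n_labels):
    # Per-label counts: (tp, fp, fn, support).
    stats = []
    for lbl in range(n_labels):
        tp = sum(lbl in t and lbl in p for t, p in zip(y_true, y_pred))
        fp = sum(lbl not in t and lbl in p for t, p in zip(y_true, y_pred))
        fn = sum(lbl in t and lbl not in p for t, p in zip(y_true, y_pred))
        stats.append((tp, fp, fn, sum(lbl in t for t in y_true)))
    per_label = [_f1(tp, fp, fn) for tp, fp, fn, _ in stats]
    supports = [s[3] for s in stats]
    return {
        # Macro: unweighted mean of per-label F1.
        "macro": sum(per_label) / n_labels,
        # Weighted: per-label F1 weighted by label support.
        "weighted": sum(f * s for f, s in zip(per_label, supports)) / sum(supports),
        # Micro: F1 over globally pooled tp/fp/fn.
        "micro": _f1(sum(s[0] for s in stats), sum(s[1] for s in stats),
                     sum(s[2] for s in stats)),
        # Samples: mean of per-sample F1.
        "samples": sum(_f1(len(t & p), len(p - t), len(t - p))
                       for t, p in zip(y_true, y_pred)) / len(y_true),
    }

scores = multilabel_f1_scores(y_true, y_pred, n_labels=3)
```

Macro treats rare labels and common labels equally, which is why the model's F1 Macro (0.7665) sits well below its F1 Micro (0.8944): a few low-support labels pull the unweighted mean down.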
srinivasan-sridhar28/Tiny-Storyteller
srinivasan-sridhar28
2024-07-01T14:02:15Z
0
0
null
[ "region:us" ]
null
2024-07-01T14:02:15Z
Entry not found
fiveflow/orpo_gemma
fiveflow
2024-07-01T14:02:30Z
0
0
null
[ "region:us" ]
null
2024-07-01T14:02:30Z
Entry not found
jerryyun/kicon_mixtral87_qlora_v3
jerryyun
2024-07-01T14:04:39Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-07-01T14:03:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
apersonnaz/crystalDetect_bin_uv_512_20240701-160327
apersonnaz
2024-07-01T15:32:46Z
0
0
null
[ "safetensors", "region:us" ]
null
2024-07-01T14:03:30Z
Entry not found
apersonnaz/crystalDetect_bin_vis_512_20240701-160327
apersonnaz
2024-07-01T15:52:39Z
0
0
null
[ "safetensors", "region:us" ]
null
2024-07-01T14:03:30Z
Entry not found
aatreyajha/self_trained_distilbert
aatreyajha
2024-07-01T14:04:15Z
0
0
null
[ "region:us" ]
null
2024-07-01T14:04:15Z
Entry not found
GraydientPlatformAPI/dreamweaver25
GraydientPlatformAPI
2024-07-01T15:00:40Z
0
0
diffusers
[ "diffusers", "safetensors", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-07-01T14:06:00Z
Entry not found
Chairles-alex/autotrain-mistral-small
Chairles-alex
2024-07-01T14:07:25Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "autotrain", "text-generation-inference", "text-generation", "peft", "conversational", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2024-07-01T14:07:00Z
--- tags: - autotrain - text-generation-inference - text-generation - peft library_name: transformers base_model: mistralai/Mistral-7B-Instruct-v0.3 widget: - messages: - role: user content: What is your favorite condiment? license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
Rajesh939/q-FrozenLake-v1-4x4-noSlippery
Rajesh939
2024-07-01T14:07:26Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-07-01T14:07:22Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gym # load_from_hub is a helper defined in the Hugging Face Deep RL course materials model = load_from_hub(repo_id="Rajesh939/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
GraydientPlatformAPI/satpony-xl
GraydientPlatformAPI
2024-07-01T14:31:22Z
0
0
diffusers
[ "diffusers", "safetensors", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-07-01T14:08:58Z
Entry not found
Rajesh939/MaximusTaxi
Rajesh939
2024-07-01T14:09:22Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-07-01T14:09:20Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: MaximusTaxi results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python import gym # load_from_hub is a helper defined in the Hugging Face Deep RL course materials model = load_from_hub(repo_id="Rajesh939/MaximusTaxi", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
chjoo7/kicon_mixtral87_qlora_merged_v3
chjoo7
2024-07-01T14:15:29Z
0
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-07-01T14:10:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
GraydientPlatformAPI/pixel-ahusaky
GraydientPlatformAPI
2024-07-01T14:31:36Z
0
0
diffusers
[ "diffusers", "safetensors", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-07-01T14:10:13Z
Entry not found
Zoya/igor_crafter_llm
Zoya
2024-07-01T14:12:42Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2024-07-01T14:11:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dhruvvaidh/cover-letter-gen-llama2
dhruvvaidh
2024-07-01T14:13:22Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-07-01T14:12:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Franzin/bigbird-roberta-base-goemotions-ekman-multilabel
Franzin
2024-07-01T14:13:55Z
0
0
transformers
[ "transformers", "safetensors", "big_bird", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-07-01T14:13:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
YulinWangThu/zephyr-7b-dpo-full
YulinWangThu
2024-07-01T14:14:11Z
0
0
null
[ "region:us" ]
null
2024-07-01T14:14:11Z
Entry not found
fiveflow/npo
fiveflow
2024-07-01T14:21:03Z
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-07-01T14:16:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
heziiiii/hydit22
heziiiii
2024-07-02T12:59:22Z
0
0
null
[ "region:us" ]
null
2024-07-01T14:16:49Z
Entry not found
herronej/v1-finetuned
herronej
2024-07-01T14:39:30Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-07-01T14:21:10Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
itay-nakash/model_387dff9370_sweep_twilight-rain-1164
itay-nakash
2024-07-01T14:27:25Z
0
0
null
[ "region:us" ]
null
2024-07-01T14:27:25Z
Entry not found
AliElshabory/whisper-small-hi
AliElshabory
2024-07-01T14:27:38Z
0
0
null
[ "region:us" ]
null
2024-07-01T14:27:37Z
Entry not found
Devops-hestabit/mixtral-instruct-trt-quant
Devops-hestabit
2024-07-01T14:45:29Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-07-01T14:27:46Z
--- license: apache-2.0 ---
habulaj/4351935627
habulaj
2024-07-01T14:28:33Z
0
0
null
[ "region:us" ]
null
2024-07-01T14:28:28Z
Entry not found
aprilcui11/llama-3-8b-chat-doctor
aprilcui11
2024-07-01T17:10:21Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-07-01T14:30:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
habulaj/1666785573
habulaj
2024-07-01T14:30:12Z
0
0
null
[ "region:us" ]
null
2024-07-01T14:30:09Z
Entry not found
zhhp1314520/gemma-2-9b
zhhp1314520
2024-07-01T14:31:04Z
0
0
null
[ "license:gemma", "region:us" ]
null
2024-07-01T14:31:04Z
--- license: gemma ---
mayarmostafa/videomae-base-finetuned-bleeding-exp_0
mayarmostafa
2024-07-02T16:52:36Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2024-07-01T14:32:05Z
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-bleeding-exp_0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# videomae-base-finetuned-bleeding-exp_0

This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4958
- Accuracy: 0.5

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.04  | 2    | 0.6967          | 0.5      |
| No log        | 1.04  | 4    | 0.6799          | 0.75     |
| No log        | 2.04  | 6    | 0.6721          | 0.75     |
| No log        | 3.04  | 8    | 0.6742          | 0.75     |
| 0.6424        | 4.04  | 10   | 0.6927          | 0.25     |
| 0.6424        | 5.04  | 12   | 0.7295          | 0.5      |
| 0.6424        | 6.04  | 14   | 0.8047          | 0.5      |
| 0.6424        | 7.04  | 16   | 0.8589          | 0.5      |
| 0.6424        | 8.04  | 18   | 0.8842          | 0.5      |
| 0.6123        | 9.04  | 20   | 0.9349          | 0.5      |
| 0.6123        | 10.04 | 22   | 0.9543          | 0.5      |
| 0.6123        | 11.04 | 24   | 0.9924          | 0.5      |
| 0.6123        | 12.04 | 26   | 1.0729          | 0.5      |
| 0.6123        | 13.04 | 28   | 1.2268          | 0.5      |
| 0.3641        | 14.04 | 30   | 1.3759          | 0.5      |
| 0.3641        | 15.04 | 32   | 1.4344          | 0.5      |
| 0.3641        | 16.04 | 34   | 1.4563          | 0.5      |
| 0.3641        | 17.04 | 36   | 1.4365          | 0.5      |
| 0.3641        | 18.04 | 38   | 1.4343          | 0.5      |
| 0.4378        | 19.04 | 40   | 1.4375          | 0.5      |
| 0.4378        | 20.04 | 42   | 1.4530          | 0.5      |
| 0.4378        | 21.04 | 44   | 1.4732          | 0.5      |
| 0.4378        | 22.04 | 46   | 1.4877          | 0.5      |
| 0.4378        | 23.04 | 48   | 1.4919          | 0.5      |
| 0.222         | 24.04 | 50   | 1.4958          | 0.5      |

### Framework versions

- Transformers 4.40.2
- Pytorch 1.12.0
- Datasets 2.19.1
- Tokenizers 0.19.1
sameeahameed/llama-3-model_lora_model_LMD_updates
sameeahameed
2024-07-01T14:33:24Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-07-01T14:33:14Z
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---

# Uploaded model

- **Developed by:** sameeahameed
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
sameeahameed/llama-3-model_lora_model_LMD_updated
sameeahameed
2024-07-01T14:33:26Z
0
0
transformers
[ "transformers", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-07-01T14:33:24Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PrabhakarVenkat/Stock-Analysis_with_CrewAI
PrabhakarVenkat
2024-07-01T14:38:42Z
0
0
null
[ "region:us" ]
null
2024-07-01T14:33:34Z
Entry not found
ohjimin/hscode
ohjimin
2024-07-01T14:41:28Z
0
0
null
[ "safetensors", "license:unlicense", "region:us" ]
null
2024-07-01T14:35:56Z
--- license: unlicense ---
GraydientPlatformAPI/wai-simpleuse
GraydientPlatformAPI
2024-07-01T15:00:53Z
0
0
diffusers
[ "diffusers", "safetensors", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-07-01T14:40:22Z
Entry not found
DeusImperator/sunfall-midnight-miqu-v0.2-v1.5-70B_exl2_2.4bpw_rpcal_mk2
DeusImperator
2024-07-01T15:29:49Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "exl2", "region:us" ]
text-generation
2024-07-01T14:41:59Z
---
library_name: transformers
tags:
- not-for-all-audiences
---

# sunfall-midnight-miqu-v0.2-v1.5-70B - EXL2 2.4bpw rpcal_mk2

This is a 2.4bpw EXL2 quant of [crestf411/sunfall-midnight-miqu-v0.2-v1.5-70B](https://huggingface.co/crestf411/sunfall-midnight-miqu-v0.2-v1.5-70B)

This quant was made using exllamav2-0.1.6 with the [Bluemoon-light dataset](https://huggingface.co/datasets/ParasiticRogue/Bluemoon-Light) for RP.

In my local testing on Windows, this quant fits 25k context in 24GB VRAM (with exl2 Q4 cache); you might be able to fit more depending on what else is using VRAM.

I tested this quant briefly in some random RPs (including ones over 8k and 20k context) and it seems to work fine.

## Prompt Templates

I used the Vicuna version of the calibration dataset, so the Vicuna prompt template will probably work best here.

### Original readme below

---

Sunfall (2024-06-07) dataset trained directly on top of https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5

Beware, depraved. Not suitable for any audience.

Experimental. Please give feedback. Begone if you demand perfection. This is still an early stage experiment.

*Recommend a decently high temperature. Start with temp 1.7, smoothing factor 0.3.*

To use lore book tags, make sure you use **Status: Blue (constant)** and write e.g.

```
Follow the Diamond Law at all costs.

Tags: humor, dark, complex storytelling, intricate characters, immersive.
```

This model has been trained on context that mimics that of Silly Tavern's Mistral preset, with the following settings:

**System Prompt:**
```
You are an expert actor that can fully immerse yourself into any role given. You do not break character for any reason. Currently your role is {{char}}, which is described in detail below. As {{char}}, continue the exchange with {{user}}.
```

Below method still works, but the lore book approach above is more convenient:

**System Same as User Enabled** (This is the default)

**Author's Note** (In-chat @ Depth 4)
```
Follow The Diamond Law at all costs.
```

Below method still works, but unless you want to write tags for a specific character card only, the lore book approach above is more convenient:

**Scenario Information** (open a character card and press "Advanced Definitions") may also contain tags at the end to guide the model further. E.g.:

```
Two friends having fun. Set in 1947.

Tags: dark, exploration, friendship, self-discovery, historical fiction
```

The card has also been trained on content which includes a narrator card, which was used when the content did not mainly revolve around two characters. Future versions will expand on this idea, so forgive the vagueness at this time.

(The Diamond Law is this: https://files.catbox.moe/d15m3g.txt -- So far results are unclear, but the training was done with this phrase included, and the training data adheres to the law.)

The model has also been trained to do storywriting, both interactively with the user and on its own. The system message ends up looking something like this:

```
You are an expert storyteller, who can roleplay or write compelling stories. Follow the Diamond Law. Below is a scenario with character descriptions and content tags. Write a story together with the user based on this scenario.

Scenario: The story is about James, blabla.

James is an overweight 63 year old blabla.

Lucy: James's 62 year old wife.

Tags: tag1, tag2, tag3, ...
```

If you remove the "together with the user" part, the model will be more inclined to write on its own.
yuchuantian/IPG
yuchuantian
2024-07-01T14:51:43Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-07-01T14:42:00Z
--- license: apache-2.0 ---
niccosala/Llama-3-8B-sft-lora-ultrachat
niccosala
2024-07-01T14:42:33Z
0
0
null
[ "region:us" ]
null
2024-07-01T14:42:33Z
Entry not found
Moriacrafter/Qwen1.5-1.8B-4bit_DepressionDetection
Moriacrafter
2024-07-01T14:44:25Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-07-01T14:43:23Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]