Column schema:

| column | dtype | details |
|:--------------|:----------------|:------------------|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 to 18.3M chars |
| metadata | stringlengths | 2 to 1.07B chars |
| id | stringlengths | 5 to 122 chars |
| last_modified | null | always null |
| tags | sequencelengths | 1 to 1.84k items |
| sha | null | always null |
| created_at | stringlengths | 25 chars |
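Given the sizes above (single `metadata` entries run up to 1.07B characters), streaming access is advisable when reading this dataset. A sketch with the `datasets` library; `<dataset-repo-id>` is a placeholder for the dataset's actual Hub id:

```python
from datasets import load_dataset

# <dataset-repo-id> is a placeholder; substitute this dataset's Hub id.
ds = load_dataset("<dataset-repo-id>", split="train", streaming=True)

# Stream a few rows without downloading the whole dataset.
for row in ds.take(3):
    print(row["id"], row["pipeline_tag"], row["created_at"])
```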
null
null
{}
St4n/xls-r-300m-sv-new
null
[ "region:us" ]
null
2024-05-03T13:57:06+00:00
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# `load_from_hub` is the Deep RL Course helper that downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="Sweety07/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
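The snippet above stops at environment creation. As a minimal evaluation sketch, assuming the pickled dictionary follows the Deep RL Course layout (a `"qtable"` array indexed by state and action) and the classic `gym` API (observation-only `reset()`, four-value `step()`):

```python
import numpy as np

episode_rewards = []
for episode in range(100):
    state = env.reset()
    total_reward = 0.0
    for _ in range(model.get("max_steps", 99)):  # "max_steps" key assumed from the course layout
        action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
        state, reward, done, info = env.step(action)
        total_reward += reward
        if done:
            break
    episode_rewards.append(total_reward)

print(f"mean_reward={np.mean(episode_rewards):.2f} +/- {np.std(episode_rewards):.2f}")
```

With `is_slippery=False` the environment is deterministic, which is consistent with the reported mean reward of 1.00 +/- 0.00.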
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
Sweety07/q-FrozenLake-v1-4x4-noSlippery
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-05-03T13:57:26+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Pho4B-ft

This model is a fine-tuned version of [vinai/PhoGPT-4B](https://huggingface.co/vinai/PhoGPT-4B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 38.3770
- Rewards/chosen: -3.5500
- Rewards/rejected: -3.5422
- Rewards/accuracies: 0.4533
- Rewards/margins: -0.0077
- Logps/rejected: -35.4224
- Logps/chosen: -35.4996
- Logits/rejected: 0.8757
- Logits/chosen: 0.8807
- Nll Loss: 38.3031
- Log Odds Ratio: -0.8538
- Log Odds Chosen: -0.0772

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Nll Loss | Log Odds Ratio | Log Odds Chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:--------------:|:---------------:|
| 38.4131       | 0.5689 | 32   | 38.3770         | -3.5500        | -3.5422          | 0.4533             | -0.0077         | -35.4224       | -35.4996     | 0.8757          | 0.8807        | 38.3031  | -0.8538        | -0.0772         |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
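The card omits a usage snippet. Since this repo is a PEFT adapter for a base model that ships custom code (note the `custom_code` tag on the repo), loading it would plausibly follow the standard PEFT pattern below; this is a sketch, not the author's confirmed recipe:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# PhoGPT-4B uses custom (MPT-style) modeling code, hence trust_remote_code=True.
base = AutoModelForCausalLM.from_pretrained("vinai/PhoGPT-4B", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "iamnguyen/Pho4B-ft")  # attach the fine-tuned adapter
tokenizer = AutoTokenizer.from_pretrained("vinai/PhoGPT-4B", trust_remote_code=True)
```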
{"library_name": "peft", "tags": ["trl", "orpo", "generated_from_trainer"], "base_model": "vinai/PhoGPT-4B", "model-index": [{"name": "Pho4B-ft", "results": []}]}
iamnguyen/Pho4B-ft
null
[ "peft", "safetensors", "mpt", "trl", "orpo", "generated_from_trainer", "custom_code", "base_model:vinai/PhoGPT-4B", "region:us" ]
null
2024-05-03T13:57:36+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
vc64/llama2-7b_combinedQA
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-03T13:57:47+00:00
reinforcement-learning
ml-agents
# **poca** Agent playing **SoccerTwos**

This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: pietroorlandi/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
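To fetch this checkpoint locally before resuming training, the ML-Agents Hub integration provides a download command; the target directory below is just an example:

```bash
# Download the trained model and its run artifacts from the Hub.
mlagents-load-from-hf --repo-id="pietroorlandi/poca-SoccerTwos" --local-dir="./downloads/poca-SoccerTwos"
```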
{"library_name": "ml-agents", "tags": ["SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos"]}
pietroorlandi/poca-SoccerTwos
null
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
null
2024-05-03T13:59:38+00:00
null
null
{}
yxxshin/Icon-like-SD
null
[ "region:us" ]
null
2024-05-03T14:00:37+00:00
null
null
{}
scite/mxbai-rerank-xsmall-v1-onnx-optimized
null
[ "region:us" ]
null
2024-05-03T14:00:50+00:00
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# `load_from_hub` is the Deep RL Course helper that downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="Dat1710/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]}
Dat1710/q-Taxi-v3
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-05-03T14:01:14+00:00
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# `load_from_hub` is the Deep RL Course helper that downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="Sweety07/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]}
Sweety07/Taxi-v3
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-05-03T14:01:15+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
julep-ai/dolphin-2.9-llama3-70b-awq
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-05-03T14:01:49+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
cilantro9246/k069igm
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T14:02:03+00:00
text-generation
transformers
# Mistral-7b-Instruct-v0.1-int8-ov

* Model creator: [Mistral AI](https://huggingface.co/mistralai)
* Original model: [Mistral-7b-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)

## Description

This is the [Mistral-7b-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) (Intermediate Representation) format, with weights compressed to INT8 by [NNCF](https://github.com/openvinotoolkit/nncf).

## Quantization Parameters

Weight compression was performed using `nncf.compress_weights` with the following parameters:

* mode: **INT8_ASYM**

For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/weight-compression.html); a sketch of this call appears below, after the card body.

## Compatibility

The provided OpenVINO™ IR model is compatible with:

* OpenVINO version 2024.1.0 and higher
* Optimum Intel 1.16.0 and higher

## Running Model Inference

1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:

```bash
pip install optimum[openvino]
```

2. Run model inference:

```python
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM

model_id = "OpenVINO/mistral-7b-instrcut-v0.1-int8-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

inputs = tokenizer.apply_chat_template(messages, return_tensors="pt")

outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).

## Limitations

Check the original model card for [limitations](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1#limitations).

## Legal information

The original model is distributed under the [Apache 2.0](https://choosealicense.com/licenses/apache-2.0/) license. More details can be found in the [original model card](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1).
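For reference, the compression step described above would plausibly look like the following NNCF call. The IR file names are placeholders, and this is a sketch of the stated parameters rather than the publisher's exact script:

```python
import nncf
import openvino as ov

core = ov.Core()
ov_model = core.read_model("openvino_model.xml")  # placeholder: path to the uncompressed IR

# 8-bit asymmetric weight-only compression (mode: INT8_ASYM, as stated above).
compressed_model = nncf.compress_weights(ov_model, mode=nncf.CompressWeightsMode.INT8_ASYM)
ov.save_model(compressed_model, "openvino_model_int8.xml")
```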
{"language": ["en"], "license": "apache-2.0"}
OpenVINO/mistral-7b-instrcut-v0.1-int4-ov
null
[ "transformers", "openvino", "mistral", "text-generation", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T14:02:05+00:00
null
null
What is Peptit complex Cream? Peptit complex Price is a state-of-the-art anti-aging serum designed to revitalize and renew the skin's appearance. Its powerful formula harnesses the power of peptides, amino acid compounds known for their ability to stimulate collagen production, increase skin elasticity, and reduce the signs of aging. With regular use, Peptit complex Reviews aims to restore the skin's youthful glow and improve overall skin health.

Official website: <a href="https://www.nutritionsee.com/peptompurke">www.Peptitcomplex.com</a>

<p><a href="https://www.nutritionsee.com/peptompurke"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/05/Peptit-complex-turkey-1.png" alt="enter image description here"> </a></p>

<a href="https://www.nutritionsee.com/peptompurke">Buy now!! Click the link below for more information and get a 50% discount right away... Hurry up</a>

Official website: <a href="https://www.nutritionsee.com/peptompurke">www.Peptitcomplex.com</a>
{"license": "apache-2.0"}
Peptitcomplex/Peptitcomplex
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-03T14:03:07+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/8aj1lky
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T14:04:33+00:00
automatic-speech-recognition
transformers
{}
hajar817/whisper_TA_base_fa_cv13
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:04:56+00:00
text-classification
transformers
# HistoroBERTa-SuicideIncidentClassifier

A binary classifier based on the RoBERTa-base architecture, fine-tuned on [historical British newspaper articles](https://huggingface.co/datasets/npedrazzini/hist_suicide_incident) to discern whether news reports discuss (confirmed or speculated) suicide cases, investigations, or court cases related to suicides. It attempts to differentiate between texts where _suicide(s)_ or _suicidal_ is used in the context of actual incidents and those where these terms appear figuratively or in broader, non-specific discussions (e.g., mentions of the number of suicides in the context of vital statistics, or abstract philosophical discussions of the morality of suicide).

# Overview

- **Model Name:** HistoroBERTa-SuicideIncidentClassifier
- **Task:** Binary Classification
- **Labels:** ['Incident', 'Not Incident']
- **Base Model:** [RoBERTa (A Robustly Optimized BERT Pretraining Approach) base model](https://huggingface.co/FacebookAI/roberta-base)
- **Language:** 19th-century English (1780-1920)
- **Developed by:** [Nilo Pedrazzini](https://huggingface.co/npedrazzini), [Daniel CS Wilson](https://huggingface.co/dcsw2)

# Input Format

A `str`-type input.

# Output Format

The predicted label (`Incident` or `Not Incident`), with a confidence score for each label.

# Examples

### Example 1

**Input:**

```
On Wednesday evening an inquest was held at the Stag and Pheasant before Major Taylor, coroner, and a jury, of whom Mr. Joel Casson was foreman, on the body of John William Birks, grocer, of 23, Huddersfield Road, who cut his throat on Tuesday evening.
```

**Output:**

```
{
 'Incident': 0.974,
 'Not Incident': 0.026
}
```

### Example 2

**Input:**

```
The death-rate by accidents among colliers is, at least, from six to seven times as great as the death-rate from violence among the whole population, including suicides homicides, and the dangerous occupations.
```

**Output:**

```
{
 'Not Incident': 0.577,
 'Incident': 0.423
}
```

# Uses

The classifier can be used, for instance, to obtain larger datasets reporting on cases of suicide in historical digitized newspapers, on which to then carry out larger-scale analyses of the language used in the reports.

# Bias, Risks, and Limitations

The classifier was trained on digitized newspaper data containing many OCR errors and, while text segmentation was meant to capture individual news articles, each labeled item in the training dataset very often spans multiple articles. This will necessarily have introduced bias in the model because of the extra content unrelated to reporting on suicide.

⚠️ **NB**: We did not carry out a systematic evaluation of the effect of bad news-article segmentation on the quality of the classifier.

# Training Details

This model was released upon comparison with other runs, and its selection was based on its accuracy on the evaluation set. Models based on RoBERTa were also compared to those based on [bert_1760_1900](https://huggingface.co/Livingwithmachines/bert_1760_1900), which achieved a slightly lower performance despite hyperparameter tuning. In the following report, the model in this repository corresponds to the one labeled `roberta-7`, specifically the output of epoch 4, which returned the highest accuracy (>0.96).

<img src="https://cdn-uploads.huggingface.co/production/uploads/6342a31d5b97f509388807f3/KXqMD4Pchpmkee5CMFFYb.png" style="width: 90%;" />

## Training Data

https://huggingface.co/datasets/npedrazzini/hist_suicide_incident

# Model Card Authors

Nilo Pedrazzini

# Model Card Contact

[email protected]

# How to use the model

Use the code below to get started with the model.

Import and load the model:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "npedrazzini/HistoroBERTa-SuicideIncidentClassifier"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Generate a prediction:

```python
input_text = "The death-rate by accidents among colliers is, at least, from six to seven times as great as the death-rate from violence among the whole population, including suicides homicides, and the dangerous occupations."

inputs = tokenizer(input_text, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
probabilities = logits.softmax(dim=-1)
```

Print the predicted label:

```python
predicted_label_id = probabilities.argmax().item()
predicted_label = model.config.id2label[predicted_label_id]
print(predicted_label)
```

Output:

```
NotIncident
```

Print the probability of each label:

```python
label_probabilities = {label: prob for label, prob in zip(model.config.id2label.values(), probabilities.squeeze().tolist())}
label_probabilities_sorted = dict(sorted(label_probabilities.items(), key=lambda item: item[1], reverse=True))
print(label_probabilities_sorted)
```

Output:

```
{'NotIncident': 0.5880260467529297, 'Incident': 0.4119739532470703}
```
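Equivalently (not part of the original card), the same scores can be obtained in one step through the `transformers` pipeline API:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="npedrazzini/HistoroBERTa-SuicideIncidentClassifier",
    top_k=None,  # return scores for both labels rather than only the top one
)

# Example 1 from the card, truncated for brevity.
print(clf("On Wednesday evening an inquest was held at the Stag and Pheasant ...")[0])
```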
{"language": ["en"], "license": "mit", "tags": ["roberta-based", "historical newspaper", "late modern english", "text classification", "not-for-all-audiences"], "datasets": ["npedrazzini/hist_suicide_incident"], "metrics": ["accuracy"], "pipeline_tag": "text-classification", "widget": [{"text": "On Wednesday evening an inquest was held at the Stag and Pheasant before Major Taylor, coroner, and a jury, of whom Mr. Joel Casson was foreman, on the body of John William Birks, grocer, of 23, Huddersfield Road, who cut his throat on Tuesday evening.", "example_title": "Example 1"}, {"text": "The death-rate by accidents among colliers is, at least, from six to seven times as great as the death-rate from violence among the whole population, including suicides homicides, and the dangerous occupations.", "example_title": "Example 2"}]}
npedrazzini/HistoroBERTa-SuicideIncidentClassifier
null
[ "transformers", "pytorch", "tf", "roberta", "text-classification", "roberta-based", "historical newspaper", "late modern english", "text classification", "not-for-all-audiences", "en", "dataset:npedrazzini/hist_suicide_incident", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:04:57+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# ef_t5_baseline_testbest_model

This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0004798427773610992
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.19.1
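The card gives no usage snippet; since the underlying task is unspecified, the following is only a generic seq2seq loading sketch with a placeholder input:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "frayline/ef_t5_baseline_testbest_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Example input text", return_tensors="pt")  # placeholder input
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```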
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google-t5/t5-base", "model-index": [{"name": "ef_t5_baseline_testbest_model", "results": []}]}
frayline/ef_t5_baseline_testbest_model
null
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T14:05:10+00:00
null
null
{}
andrealexroom/MultiARoomv0.0.0.1.5
null
[ "safetensors", "region:us" ]
null
2024-05-03T14:05:11+00:00
feature-extraction
transformers
# phospho-small

This is a SetFit model that can be used for Text Classification on CPU.

The model has been trained using an efficient few-shot learning technique.

## Usage

```python
from setfit import SetFitModel

# Load from the Hub using the full repo id, including the organization.
model = SetFitModel.from_pretrained("phospho-app/phospho-small-74095b0")

outputs = model.predict(["This is a sentence to classify", "Another sentence"])
# tensor([1, 0])
```

## References

This work was possible thanks to the SetFit library and the work of:

Tunstall, L., Reimers, N., Jo, U. E. S., Bates, L., Korat, D., Wasserblat, M., & Pereg, O. (2022). Efficient Few-Shot Learning Without Prompts. arXiv: [https://doi.org/10.48550/arxiv.2209.11055](https://doi.org/10.48550/arxiv.2209.11055)
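As a short follow-up (not on the original card), SetFit models can also return per-class probabilities instead of hard labels:

```python
# Probability distribution over classes, one row per input sentence.
probs = model.predict_proba(["This is a sentence to classify"])
print(probs)  # one row per input, one column per class
```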
{"language": "en", "license": "apache-2.0"}
phospho-app/phospho-small-74095b0
null
[ "transformers", "safetensors", "mpnet", "feature-extraction", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:06:31+00:00
null
null
{}
nicoboou/chadavit16_moyen
null
[ "region:us" ]
null
2024-05-03T14:06:35+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
OmnicromsBrain/Trainingtest7b
null
[ "transformers", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:06:56+00:00
null
null
{}
Cesco2004/TestV4.4
null
[ "region:us" ]
null
2024-05-03T14:07:18+00:00
null
transformers
{}
Mohamedshaaban2001/llama-3-8b-bnb-4bit-GGUF
null
[ "transformers", "gguf", "llama", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T14:07:57+00:00
text-generation
transformers
<img src="./llama-3-merges.webp" alt="Llama-3 DPO Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4 This model is a fine-tune (DPO) of `meta-llama/Meta-Llama-3-70B-Instruct` model. # Quantized GGUF All GGUF models are available here: [MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4-GGUF) # Prompt Template This model uses `ChatML` prompt template: ``` <|im_start|>system {System} <|im_end|> <|im_start|>user {User} <|im_end|> <|im_start|>assistant {Assistant} ```` # How to use You can use this model by using `MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4` as the model name in Hugging Face's transformers library. ```python from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer from transformers import pipeline import torch model_id = "MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4" model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True, # attn_implementation="flash_attention_2" ) tokenizer = AutoTokenizer.from_pretrained( model_id, trust_remote_code=True ) streamer = TextStreamer(tokenizer) pipeline = pipeline( "text-generation", model=model, tokenizer=tokenizer, model_kwargs={"torch_dtype": torch.bfloat16}, streamer=streamer ) # Then you can use the pipeline to generate text. messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|im_end|>"), tokenizer.convert_tokens_to_ids("<|eot_id|>") # safer to have this too ] outputs = pipeline( prompt, max_new_tokens=2048, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.95, ) print(outputs[0]["generated_text"][len(prompt):]) ``` ## Sample Response ``` <|im_start|>system You are a helpful assistant.<|im_end|> <|im_start|>user describe about pros and cons of docker system.<|im_end|> <|im_start|>assistant ``` ``` Here are the pros and cons of the Docker system: **Pros:** 1. **Lightweight**: Docker containers are lightweight and use fewer resources compared to traditional virtual machines. 2. **Isolation**: Docker provides a high level of isolation between containers, ensuring that each application runs independently without interfering with others. 3. **Portability**: Docker containers are highly portable and can run on any system that supports Docker, without requiring specific dependencies or configurations. 4. **Efficient Use of Resources**: Docker containers can share the same kernel and use fewer resources, making them more efficient than traditional virtual machines. 5. **Fast Deployment**: Docker containers can be spun up quickly, making it ideal for rapid deployment and scaling of applications. 6. **Easy Maintenance**: Docker provides a simple way to manage and update applications, with features like rolling updates and rollbacks. 7. **Security**: Docker provides a secure environment for applications, with features like network isolation and encryption. 8. **Large Ecosystem**: Docker has a large and active ecosystem, with a wide range of tools and plugins available for various tasks. 9. **Version Control**: Docker provides an easy way to manage different versions of an application, with features like version tagging and rollbacks. 10. 
**Cost-Effective**: Docker can help reduce costs by minimizing the need for virtual machines and reducing the overhead of resource allocation. **Cons:** 1. **Steep Learning Curve**: Docker has a complex architecture and requires a good understanding of containerization, networking, and storage. 2. **Overhead**: While Docker containers are lightweight, they still require some overhead in terms of resources and management. 3. **Security Risks**: If not properly configured, Docker containers can introduce security risks, such as unauthorized access to the host system. 4. **Limited Support for GUI Applications**: Docker is primarily designed for running command-line applications, and running GUI applications can be challenging. 5. **Networking Complexity**: Docker's networking model can be complex, especially in large-scale deployments. 6. **Storage Complexity**: Docker's storage model can be complex, especially when dealing with persistent data. 7. **Limited Support for Legacy Applications**: Docker is designed for modern, cloud-native applications, and may not be suitable for legacy applications that rely on older technologies. 8. **Dependence on Docker Hub**: Docker relies heavily on Docker Hub, which can be a single point of failure if the hub is down or experiencing issues. 9. **Orchestration Complexity**: Docker provides a built-in orchestration tool, Docker Swarm, but it can be complex to set up and manage. 10. **Vendor Lock-in**: Docker is a proprietary technology, and while it has a large ecosystem, it can be difficult to switch to alternative containerization platforms. Overall, Docker provides a powerful and flexible way to deploy and manage applications, but it requires careful planning, configuration, and management to ensure optimal performance and security. ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__Llama-3-70B-Instruct-DPO-v0.4) | Metric |Value| |---------------------------------|----:| |Avg. |78.89| |AI2 Reasoning Challenge (25-Shot)|72.61| |HellaSwag (10-Shot) |86.03| |MMLU (5-Shot) |80.50| |TruthfulQA (0-shot) |63.26| |Winogrande (5-shot) |83.58| |GSM8k (5-shot) |87.34|
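For the GGUF quantizations linked above, local inference with `llama-cpp-python` might look like the following sketch; the filename pattern is a hypothetical quant choice, so check the GGUF repo's file list for the actual names:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical: pick an actual quant file from the repo
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}]
)
print(out["choices"][0]["message"]["content"])
```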
{"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["axolotl", "finetune", "dpo", "facebook", "meta", "pytorch", "llama", "llama-3", "chatml"], "datasets": ["argilla/ultrafeedback-binarized-preferences"], "base_model": "meta-llama/Meta-Llama-3-70B-Instruct", "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "inference": false, "model_creator": "MaziyarPanahi", "quantized_by": "MaziyarPanahi", "model-index": [{"name": "Llama-3-70B-Instruct-DPO-v0.4", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 72.61, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 86.03, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 80.5, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 63.26}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 83.58, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 87.34, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}]}]}
blockblockblock/Llama-3-70B-Instruct-DPO-v0.4-bpw2.25-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "axolotl", "finetune", "dpo", "facebook", "meta", "pytorch", "llama-3", "chatml", "conversational", "en", "dataset:argilla/ultrafeedback-binarized-preferences", "base_model:meta-llama/Meta-Llama-3-70B-Instruct", "license:llama3", "model-index", "autotrain_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T14:08:04+00:00
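The record above catalogs a conversational DPO fine-tune; the entry itself is a 2.25-bpw EXL2 quantization, which would be served with an exllamav2-based runtime rather than plain transformers. As a rough orientation only, here is a minimal chat-templated generation sketch against the unquantized source repo: the MaziyarPanahi repo id comes from the record's leaderboard URLs, while the prompt, dtype, and generation settings are illustrative assumptions.

```python
# Hypothetical usage sketch; only the repo id is taken from the record above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4"  # source repo; this record is its 2.25bpw EXL2 quant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarize DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```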
null
null
{}
ivykopal/mlqa_es_adapter_100k
null
[ "region:us" ]
null
2024-05-03T14:08:10+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
OwOpeepeepoopoo/herewegoagain16
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:09:17+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/07n1ce3
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T14:09:37+00:00
null
null
{"license": "other", "license_name": "1", "license_link": "LICENSE"}
Boowy/da
null
[ "license:other", "region:us" ]
null
2024-05-03T14:09:39+00:00
null
null
{}
Liv010/trashbin_test
null
[ "region:us" ]
null
2024-05-03T14:10:12+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
geniacllm/Mixtral-dMoE-8x2B
null
[ "transformers", "safetensors", "mixtral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T14:10:46+00:00
feature-extraction
sentence-transformers
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed to support various applications in natural language processing and understanding. ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started (note the full repo id, including the `fine-tuned/` namespace): ```python from transformers import AutoModel, AutoTokenizer llm_name = "fine-tuned/jina-embeddings-v2-base-en-03052024-21on-webapp" tokenizer = AutoTokenizer.from_pretrained(llm_name) model = AutoModel.from_pretrained(llm_name, trust_remote_code=True) tokens = tokenizer("Your text here", return_tensors="pt") embedding = model(**tokens) ```
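Since the snippet above returns raw model outputs, a hedged follow-up showing one way to turn them into sentence embeddings and compare two texts: mean pooling over the last hidden state is an assumption about this model family (a repo-provided `model.encode(...)` helper, where available, handles pooling internally), `tokenizer` and `model` carry over from the snippet above, and the example sentences simply echo the record's Events/Meetups tags.

```python
# Hypothetical follow-up: cosine similarity between two mean-pooled embeddings.
import torch
import torch.nn.functional as F

def embed(text: str) -> torch.Tensor:
    tokens = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**tokens).last_hidden_state   # (1, seq_len, dim)
    mask = tokens["attention_mask"].unsqueeze(-1)    # zero out padding positions
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

a = embed("Find a networking meetup near me")
b = embed("Local community events this week")
print(F.cosine_similarity(a, b).item())
```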
{"language": ["en"], "license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb", "Events", "Meetups", "Networking", "Community", "Social"], "datasets": ["fine-tuned/jina-embeddings-v2-base-en-03052024-21on-webapp", "allenai/c4"], "pipeline_tag": "feature-extraction"}
fine-tuned/jina-embeddings-v2-base-en-03052024-21on-webapp
null
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "Events", "Meetups", "Networking", "Community", "Social", "custom_code", "en", "dataset:fine-tuned/jina-embeddings-v2-base-en-03052024-21on-webapp", "dataset:allenai/c4", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:10:59+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # shawgpt-ft This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.7929 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.921 | 0.9892 | 69 | 3.1268 | | 2.7734 | 1.9928 | 139 | 2.7793 | | 2.5538 | 2.9964 | 209 | 2.7026 | | 2.4648 | 4.0 | 279 | 2.7008 | | 2.4164 | 4.9892 | 348 | 2.7113 | | 2.3266 | 5.9928 | 418 | 2.6972 | | 2.2489 | 6.9964 | 488 | 2.7195 | | 2.1813 | 8.0 | 558 | 2.7573 | | 2.2002 | 8.9892 | 627 | 2.7826 | | 2.0955 | 9.8925 | 690 | 2.7929 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.1.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
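The hyperparameter list above maps almost one-to-one onto `transformers.TrainingArguments`; a sketch of that mapping follows, with `output_dir` as a placeholder and the run's PEFT/LoRA and dataset setup omitted because the card does not state them.

```python
# Sketch reconstructing the listed training configuration; not the exact run.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="shawgpt-ft",            # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,      # effective batch size 16
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_steps=2,
    fp16=True,                          # "Native AMP" mixed precision
    seed=42,
)
```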
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "model-index": [{"name": "shawgpt-ft", "results": []}]}
jaki-1/shawgpt-ft
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-05-03T14:10:59+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"license": "apache-2.0", "library_name": "transformers", "basemodel": "Qwen/Qwen1.5-7B"}
YeungNLP/firefly-qwen1.5-en-7b-dpo-v0.1-unsloth
null
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T14:11:50+00:00
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Bert-MLM-fine-tuned-model This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1592 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 2168 | 1.2200 | | No log | 2.0 | 4336 | 1.1724 | | No log | 3.0 | 6504 | 1.1592 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
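For a quick sanity check of a fine-tuned masked LM like the one above, the fill-mask pipeline is the shortest path; this sketch assumes the repo id from this record and an arbitrary probe sentence.

```python
# Hypothetical smoke test; [MASK] is the mask token of BERT-style tokenizers.
from transformers import pipeline

fill = pipeline("fill-mask", model="AmalNlal/Bert-MLM-fine-tuned-model")
for pred in fill("Paris is the [MASK] of France."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```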
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google-bert/bert-base-multilingual-cased", "model-index": [{"name": "Bert-MLM-fine-tuned-model", "results": []}]}
AmalNlal/Bert-MLM-fine-tuned-model
null
[ "transformers", "safetensors", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:12:32+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ef_gpt_rad2bhc_testbest_model This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.967888048003899e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 123 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.40.1 - Pytorch 2.1.2 - Datasets 2.19.0 - Tokenizers 0.19.1
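A comparable smoke test for the GPT-2 fine-tune above; the repo id is taken from this record, while the prompt (suggested only by the `rad2bhc` name, presumably radiology-report-to-brief-hospital-course) and the sampling settings are assumptions.

```python
# Hypothetical generation check; prompt and decoding parameters are illustrative.
from transformers import pipeline

gen = pipeline("text-generation", model="frayline/ef_gpt_rad2bhc_testbest_model")
out = gen("FINDINGS:", max_new_tokens=60, do_sample=True, top_p=0.9)
print(out[0]["generated_text"])
```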
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "ef_gpt_rad2bhc_testbest_model", "results": []}]}
frayline/ef_gpt_rad2bhc_testbest_model
null
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T14:12:43+00:00
null
null
{}
domenicrosati/mxbai-rerank-xsmall-v1-onnx-optimized
null
[ "region:us" ]
null
2024-05-03T14:12:55+00:00
text2text-generation
transformers
{}
ignacioct/args.run_name
null
[ "transformers", "safetensors", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T14:13:12+00:00
null
null
{}
macadeliccc/my_awesome_model
null
[ "region:us" ]
null
2024-05-03T14:13:40+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
OwOpeepeepoopoo/herewegoagain14c
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:13:49+00:00
null
null
{"license": "cc-by-nc-sa-4.0"}
JustineJ/OC_IMLP5
null
[ "license:cc-by-nc-sa-4.0", "region:us" ]
null
2024-05-03T14:14:33+00:00
feature-extraction
transformers
{}
MahmoudTaktak/LEGAL
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:16:25+00:00
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
beingbatman/blip2-opt-2.7b-rad-report-mimic-cxr-d1-old
null
[ "transformers", "safetensors", "blip", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:18:43+00:00
null
null
{}
domenicrosati/domenicrosati
null
[ "region:us" ]
null
2024-05-03T14:18:52+00:00
token-classification
transformers
{}
pontusnorman123/layoutlmv3-finetuned-sweset3_wild751_combtest_v3
null
[ "transformers", "tensorboard", "safetensors", "layoutlmv3", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:19:05+00:00
text-generation
transformers
{}
AhmedTaha012/gptneo-TxtToJson-v0.3.0
null
[ "transformers", "pytorch", "tensorboard", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:20:43+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_0-seqsight_32768_512_43M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.6113 - F1 Score: 0.7406 - Accuracy: 0.7407 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6355 | 3.92 | 200 | 0.5919 | 0.6649 | 0.6654 | | 0.5735 | 7.84 | 400 | 0.5729 | 0.7062 | 0.7062 | | 0.546 | 11.76 | 600 | 0.5586 | 0.7217 | 0.7259 | | 0.5193 | 15.69 | 800 | 0.5655 | 0.7173 | 0.7247 | | 0.504 | 19.61 | 1000 | 0.5275 | 0.7514 | 0.7519 | | 0.4849 | 23.53 | 1200 | 0.5448 | 0.7243 | 0.7296 | | 0.4657 | 27.45 | 1400 | 0.5294 | 0.7481 | 0.7481 | | 0.4528 | 31.37 | 1600 | 0.5494 | 0.7530 | 0.7531 | | 0.4421 | 35.29 | 1800 | 0.5300 | 0.7654 | 0.7654 | | 0.4255 | 39.22 | 2000 | 0.5357 | 0.7555 | 0.7556 | | 0.4131 | 43.14 | 2200 | 0.5389 | 0.7581 | 0.7593 | | 0.407 | 47.06 | 2400 | 0.5433 | 0.7637 | 0.7642 | | 0.3896 | 50.98 | 2600 | 0.5581 | 0.7765 | 0.7765 | | 0.3812 | 54.9 | 2800 | 0.5430 | 0.7636 | 0.7642 | | 0.3687 | 58.82 | 3000 | 0.5724 | 0.7703 | 0.7704 | | 0.3572 | 62.75 | 3200 | 0.5860 | 0.7740 | 0.7741 | | 0.3475 | 66.67 | 3400 | 0.5887 | 0.7581 | 0.7580 | | 0.3384 | 70.59 | 3600 | 0.6279 | 0.7642 | 0.7642 | | 0.3251 | 74.51 | 3800 | 0.6395 | 0.7624 | 0.7642 | | 0.3229 | 78.43 | 4000 | 0.6281 | 0.7729 | 0.7728 | | 0.3096 | 82.35 | 4200 | 0.6224 | 0.7703 | 0.7704 | | 0.3001 | 86.27 | 4400 | 0.6456 | 0.7637 | 0.7642 | | 0.292 | 90.2 | 4600 | 0.6421 | 0.7580 | 0.7580 | | 0.2874 | 94.12 | 4800 | 0.6674 | 0.7740 | 0.7741 | | 0.2784 | 98.04 | 5000 | 0.6710 | 0.7712 | 0.7716 | | 0.2713 | 101.96 | 5200 | 0.6843 | 0.7655 | 0.7654 | | 0.2639 | 105.88 | 5400 | 0.7009 | 0.7642 | 0.7642 | | 0.2593 | 109.8 | 5600 | 0.7156 | 0.7567 | 0.7568 | | 0.2495 | 113.73 | 5800 | 0.6869 | 0.7713 | 0.7716 | | 0.2462 | 117.65 | 6000 | 0.7264 | 0.7642 | 0.7642 | | 0.2409 | 121.57 | 6200 | 0.7550 | 0.7580 | 0.7580 | | 0.2326 | 125.49 | 6400 | 0.7553 | 0.7507 | 0.7506 | | 0.2311 | 129.41 | 6600 | 0.7816 | 0.7630 | 0.7630 | | 0.2269 | 133.33 | 6800 | 0.7690 | 0.7553 | 0.7556 | | 0.2275 | 137.25 | 7000 | 0.7599 | 0.7531 | 0.7531 | | 0.2204 | 141.18 | 7200 | 0.7752 | 0.7617 | 0.7617 | | 0.2155 | 145.1 | 7400 | 0.8115 | 0.7580 | 0.7580 | | 0.2138 | 149.02 | 7600 | 0.7925 | 0.7469 | 0.7469 | | 0.2167 | 152.94 | 7800 | 0.7839 | 0.7506 | 0.7506 | | 0.2052 | 156.86 | 8000 | 0.8067 | 0.7605 | 0.7605 | | 0.2061 | 160.78 | 8200 | 0.8162 | 0.7593 | 0.7593 | | 0.2059 | 164.71 | 8400 | 0.8187 | 0.7592 | 0.7593 | | 0.2026 | 168.63 | 8600 | 0.8059 | 0.7556 | 0.7556 | | 0.2012 | 172.55 | 8800 | 0.8118 | 0.7605 | 0.7605 | | 0.2027 | 176.47 | 9000 | 0.8080 | 0.7580 | 0.7580 | | 0.1975 | 180.39 | 9200 | 0.8110 | 0.7593 | 0.7593 | | 0.1955 | 184.31 | 9400 | 0.8232 | 0.7593 | 0.7593 | | 0.1953 | 188.24 | 9600 | 0.8376 | 0.7580 | 0.7580 | | 0.1922 | 192.16 | 9800 | 0.8251 | 0.7605 | 0.7605 | | 0.1944 | 196.08 | 10000 | 0.8256 | 0.7593 | 0.7593 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
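The card above describes a PEFT adapter trained on top of a seqsight base checkpoint; a hedged loading sketch follows. The sequence-classification head, `num_labels=2`, and `trust_remote_code=True` are assumptions about the seqsight checkpoints rather than facts stated in the card. The sibling GUE_mouse_1 and GUE_EMP_H3K4me2 records below follow the same pattern with their own adapter ids.

```python
# Hypothetical adapter-loading sketch; head type and label count are assumed.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_32768_512_43M"
adapter_id = "mahdibaghbanzadeh/GUE_mouse_0-seqsight_32768_512_43M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights
model.eval()
```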
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_0-seqsight_32768_512_43M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_0-seqsight_32768_512_43M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:22:26+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_1-seqsight_32768_512_43M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.2304 - F1 Score: 0.9014 - Accuracy: 0.9014 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.4439 | 0.47 | 200 | 0.3142 | 0.8587 | 0.8587 | | 0.3305 | 0.95 | 400 | 0.3015 | 0.8683 | 0.8683 | | 0.2983 | 1.42 | 600 | 0.2712 | 0.8774 | 0.8775 | | 0.3006 | 1.9 | 800 | 0.2651 | 0.8809 | 0.8811 | | 0.2827 | 2.37 | 1000 | 0.2585 | 0.8853 | 0.8854 | | 0.2754 | 2.84 | 1200 | 0.2558 | 0.8874 | 0.8875 | | 0.2592 | 3.32 | 1400 | 0.2508 | 0.8922 | 0.8922 | | 0.2625 | 3.79 | 1600 | 0.2588 | 0.8916 | 0.8916 | | 0.2587 | 4.27 | 1800 | 0.2435 | 0.8938 | 0.8940 | | 0.2556 | 4.74 | 2000 | 0.2602 | 0.8913 | 0.8913 | | 0.2508 | 5.21 | 2200 | 0.2437 | 0.8945 | 0.8947 | | 0.2505 | 5.69 | 2400 | 0.2474 | 0.8967 | 0.8967 | | 0.2447 | 6.16 | 2600 | 0.2382 | 0.8970 | 0.8971 | | 0.2417 | 6.64 | 2800 | 0.2396 | 0.8973 | 0.8974 | | 0.2418 | 7.11 | 3000 | 0.2443 | 0.8972 | 0.8973 | | 0.2368 | 7.58 | 3200 | 0.2445 | 0.8996 | 0.8996 | | 0.2414 | 8.06 | 3400 | 0.2315 | 0.9021 | 0.9021 | | 0.2334 | 8.53 | 3600 | 0.2428 | 0.8978 | 0.8979 | | 0.2387 | 9.0 | 3800 | 0.2370 | 0.9017 | 0.9017 | | 0.2316 | 9.48 | 4000 | 0.2319 | 0.9041 | 0.9042 | | 0.2371 | 9.95 | 4200 | 0.2291 | 0.9037 | 0.9038 | | 0.2291 | 10.43 | 4400 | 0.2345 | 0.9032 | 0.9032 | | 0.2284 | 10.9 | 4600 | 0.2371 | 0.9023 | 0.9023 | | 0.2261 | 11.37 | 4800 | 0.2330 | 0.9030 | 0.9030 | | 0.235 | 11.85 | 5000 | 0.2351 | 0.9021 | 0.9021 | | 0.2269 | 12.32 | 5200 | 0.2345 | 0.9032 | 0.9032 | | 0.2278 | 12.8 | 5400 | 0.2447 | 0.9010 | 0.9010 | | 0.228 | 13.27 | 5600 | 0.2277 | 0.9033 | 0.9033 | | 0.2227 | 13.74 | 5800 | 0.2296 | 0.9036 | 0.9036 | | 0.2272 | 14.22 | 6000 | 0.2287 | 0.9048 | 0.9048 | | 0.2201 | 14.69 | 6200 | 0.2288 | 0.9040 | 0.9041 | | 0.2269 | 15.17 | 6400 | 0.2417 | 0.9002 | 0.9002 | | 0.226 | 15.64 | 6600 | 0.2310 | 0.9032 | 0.9032 | | 0.2209 | 16.11 | 6800 | 0.2297 | 0.9020 | 0.9021 | | 0.2165 | 16.59 | 7000 | 0.2274 | 0.9058 | 0.9059 | | 0.2246 | 17.06 | 7200 | 0.2272 | 0.9064 | 0.9064 | | 0.2164 | 17.54 | 7400 | 0.2329 | 0.9039 | 0.9039 | | 0.2211 | 18.01 | 7600 | 0.2240 | 0.9071 | 0.9072 | | 0.216 | 18.48 | 7800 | 0.2279 | 0.9050 | 0.9050 | | 0.2209 | 18.96 | 8000 | 0.2276 | 0.9048 | 0.9048 | | 0.2183 | 19.43 | 8200 | 0.2277 | 0.9047 | 0.9047 | | 0.2177 | 19.91 | 8400 | 0.2284 | 0.9034 | 0.9035 | | 0.2193 | 20.38 | 8600 | 0.2263 | 0.9055 | 0.9056 | | 0.2162 | 20.85 | 8800 | 0.2274 | 0.9040 | 0.9041 | | 0.214 | 21.33 | 9000 | 0.2295 | 0.9051 | 0.9051 | | 0.2134 | 21.8 | 9200 | 0.2289 | 0.9048 | 0.9048 | | 0.218 | 22.27 | 9400 | 0.2275 | 0.9047 | 0.9047 | | 0.2141 | 22.75 | 9600 | 0.2286 | 0.9058 | 0.9059 | | 0.2154 | 23.22 | 9800 | 0.2281 | 0.9057 | 0.9057 | | 0.2182 | 23.7 | 10000 | 0.2273 | 0.9057 | 0.9057 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_1-seqsight_32768_512_43M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_1-seqsight_32768_512_43M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:22:26+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_1-seqsight_32768_512_43M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.2321 - F1 Score: 0.9021 - Accuracy: 0.9021 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.4146 | 0.47 | 200 | 0.2979 | 0.8655 | 0.8655 | | 0.3089 | 0.95 | 400 | 0.2727 | 0.8817 | 0.8817 | | 0.2776 | 1.42 | 600 | 0.2494 | 0.8899 | 0.8900 | | 0.2737 | 1.9 | 800 | 0.2426 | 0.8961 | 0.8962 | | 0.2594 | 2.37 | 1000 | 0.2409 | 0.8956 | 0.8956 | | 0.2513 | 2.84 | 1200 | 0.2410 | 0.8947 | 0.8947 | | 0.2397 | 3.32 | 1400 | 0.2422 | 0.8981 | 0.8981 | | 0.2449 | 3.79 | 1600 | 0.2384 | 0.8989 | 0.8989 | | 0.2416 | 4.27 | 1800 | 0.2324 | 0.9037 | 0.9038 | | 0.2398 | 4.74 | 2000 | 0.2481 | 0.8972 | 0.8973 | | 0.2349 | 5.21 | 2200 | 0.2360 | 0.8999 | 0.9001 | | 0.2335 | 5.69 | 2400 | 0.2385 | 0.8987 | 0.8987 | | 0.2281 | 6.16 | 2600 | 0.2312 | 0.9029 | 0.9030 | | 0.2258 | 6.64 | 2800 | 0.2312 | 0.8998 | 0.8999 | | 0.2247 | 7.11 | 3000 | 0.2352 | 0.9019 | 0.9020 | | 0.2201 | 7.58 | 3200 | 0.2347 | 0.9020 | 0.9020 | | 0.2255 | 8.06 | 3400 | 0.2230 | 0.9054 | 0.9056 | | 0.215 | 8.53 | 3600 | 0.2267 | 0.9033 | 0.9033 | | 0.2223 | 9.0 | 3800 | 0.2280 | 0.9046 | 0.9047 | | 0.2128 | 9.48 | 4000 | 0.2226 | 0.9096 | 0.9097 | | 0.218 | 9.95 | 4200 | 0.2198 | 0.9067 | 0.9067 | | 0.2087 | 10.43 | 4400 | 0.2301 | 0.9075 | 0.9075 | | 0.2088 | 10.9 | 4600 | 0.2277 | 0.9072 | 0.9072 | | 0.2071 | 11.37 | 4800 | 0.2231 | 0.9075 | 0.9075 | | 0.2124 | 11.85 | 5000 | 0.2289 | 0.9043 | 0.9044 | | 0.2061 | 12.32 | 5200 | 0.2254 | 0.9048 | 0.9048 | | 0.2073 | 12.8 | 5400 | 0.2354 | 0.9039 | 0.9039 | | 0.2032 | 13.27 | 5600 | 0.2220 | 0.9086 | 0.9087 | | 0.2013 | 13.74 | 5800 | 0.2282 | 0.9054 | 0.9054 | | 0.2061 | 14.22 | 6000 | 0.2203 | 0.9085 | 0.9085 | | 0.1977 | 14.69 | 6200 | 0.2282 | 0.9085 | 0.9085 | | 0.2024 | 15.17 | 6400 | 0.2332 | 0.9063 | 0.9063 | | 0.2018 | 15.64 | 6600 | 0.2248 | 0.9077 | 0.9078 | | 0.1963 | 16.11 | 6800 | 0.2276 | 0.9043 | 0.9044 | | 0.1912 | 16.59 | 7000 | 0.2229 | 0.9100 | 0.9100 | | 0.2007 | 17.06 | 7200 | 0.2215 | 0.9106 | 0.9106 | | 0.1888 | 17.54 | 7400 | 0.2275 | 0.9097 | 0.9097 | | 0.1976 | 18.01 | 7600 | 0.2204 | 0.9104 | 0.9105 | | 0.1896 | 18.48 | 7800 | 0.2251 | 0.9113 | 0.9113 | | 0.1934 | 18.96 | 8000 | 0.2200 | 0.9088 | 0.9088 | | 0.1926 | 19.43 | 8200 | 0.2247 | 0.9075 | 0.9075 | | 0.1892 | 19.91 | 8400 | 0.2316 | 0.9062 | 0.9063 | | 0.1916 | 20.38 | 8600 | 0.2280 | 0.9072 | 0.9072 | | 0.1894 | 20.85 | 8800 | 0.2277 | 0.9065 | 0.9066 | | 0.1864 | 21.33 | 9000 | 0.2292 | 0.9082 | 0.9082 | | 0.185 | 21.8 | 9200 | 0.2275 | 0.9095 | 0.9096 | | 0.1918 | 22.27 | 9400 | 0.2245 | 0.9081 | 0.9081 | | 0.1856 | 22.75 | 9600 | 0.2276 | 0.9074 | 0.9075 | | 0.1853 | 23.22 | 9800 | 0.2281 | 0.9086 | 0.9087 | | 0.1874 | 23.7 | 10000 | 0.2269 | 0.9091 | 0.9091 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_1-seqsight_32768_512_43M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_1-seqsight_32768_512_43M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:22:26+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me2-seqsight_16384_512_34M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset. It achieves the following results on the evaluation set: - Loss: 0.5985 - F1 Score: 0.6801 - Accuracy: 0.6790 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6491 | 1.04 | 200 | 0.6203 | 0.6358 | 0.6595 | | 0.6139 | 2.08 | 400 | 0.6525 | 0.6229 | 0.6227 | | 0.6056 | 3.12 | 600 | 0.6067 | 0.6758 | 0.6761 | | 0.5978 | 4.17 | 800 | 0.6232 | 0.6649 | 0.6624 | | 0.5883 | 5.21 | 1000 | 0.6098 | 0.6699 | 0.6680 | | 0.5829 | 6.25 | 1200 | 0.6085 | 0.6710 | 0.6689 | | 0.5722 | 7.29 | 1400 | 0.5984 | 0.6777 | 0.6764 | | 0.5685 | 8.33 | 1600 | 0.6231 | 0.6688 | 0.6663 | | 0.5613 | 9.38 | 1800 | 0.6276 | 0.6632 | 0.6608 | | 0.5524 | 10.42 | 2000 | 0.6044 | 0.6773 | 0.6768 | | 0.5508 | 11.46 | 2200 | 0.6039 | 0.6693 | 0.6712 | | 0.5345 | 12.5 | 2400 | 0.6154 | 0.6766 | 0.6761 | | 0.5272 | 13.54 | 2600 | 0.6787 | 0.6570 | 0.6546 | | 0.5231 | 14.58 | 2800 | 0.6346 | 0.6727 | 0.6716 | | 0.5208 | 15.62 | 3000 | 0.6651 | 0.6665 | 0.6641 | | 0.5072 | 16.67 | 3200 | 0.6459 | 0.6751 | 0.6735 | | 0.4981 | 17.71 | 3400 | 0.7131 | 0.6359 | 0.6351 | | 0.4907 | 18.75 | 3600 | 0.6785 | 0.6630 | 0.6605 | | 0.4875 | 19.79 | 3800 | 0.6916 | 0.6662 | 0.6637 | | 0.475 | 20.83 | 4000 | 0.6827 | 0.6731 | 0.6709 | | 0.4781 | 21.88 | 4200 | 0.7175 | 0.6698 | 0.6673 | | 0.4623 | 22.92 | 4400 | 0.7095 | 0.6615 | 0.6592 | | 0.4578 | 23.96 | 4600 | 0.7411 | 0.6587 | 0.6572 | | 0.4513 | 25.0 | 4800 | 0.7685 | 0.6460 | 0.6455 | | 0.4443 | 26.04 | 5000 | 0.7568 | 0.6595 | 0.6572 | | 0.4349 | 27.08 | 5200 | 0.7462 | 0.6687 | 0.6663 | | 0.4334 | 28.12 | 5400 | 0.7394 | 0.6627 | 0.6601 | | 0.4244 | 29.17 | 5600 | 0.7322 | 0.6685 | 0.6660 | | 0.4155 | 30.21 | 5800 | 0.8332 | 0.6513 | 0.6500 | | 0.4177 | 31.25 | 6000 | 0.7752 | 0.6671 | 0.6647 | | 0.4104 | 32.29 | 6200 | 0.7569 | 0.6699 | 0.6676 | | 0.4023 | 33.33 | 6400 | 0.7934 | 0.6654 | 0.6631 | | 0.3926 | 34.38 | 6600 | 0.7863 | 0.6650 | 0.6624 | | 0.3932 | 35.42 | 6800 | 0.8033 | 0.6585 | 0.6566 | | 0.3864 | 36.46 | 7000 | 0.8465 | 0.6496 | 0.6487 | | 0.3869 | 37.5 | 7200 | 0.7903 | 0.6705 | 0.6680 | | 0.3732 | 38.54 | 7400 | 0.8169 | 0.6601 | 0.6575 | | 0.3763 | 39.58 | 7600 | 0.8006 | 0.6720 | 0.6696 | | 0.376 | 40.62 | 7800 | 0.8370 | 0.6606 | 0.6582 | | 0.3687 | 41.67 | 8000 | 0.8467 | 0.6533 | 0.6514 | | 0.3625 | 42.71 | 8200 | 0.8433 | 0.6672 | 0.6647 | | 0.3657 | 43.75 | 8400 | 0.8259 | 0.6627 | 0.6601 | | 0.3587 | 44.79 | 8600 | 0.8573 | 0.6622 | 0.6598 | | 0.3544 | 45.83 | 8800 | 0.8594 | 0.6674 | 0.6654 | | 
0.3594 | 46.88 | 9000 | 0.8783 | 0.6523 | 0.6507 | | 0.3547 | 47.92 | 9200 | 0.8590 | 0.6667 | 0.6644 | | 0.3543 | 48.96 | 9400 | 0.8408 | 0.6653 | 0.6628 | | 0.3477 | 50.0 | 9600 | 0.8791 | 0.6603 | 0.6582 | | 0.3496 | 51.04 | 9800 | 0.8755 | 0.6626 | 0.6605 | | 0.3474 | 52.08 | 10000 | 0.8741 | 0.6643 | 0.6621 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
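A minimal sketch of loading this adapter for inference, assuming a standard PEFT layout over the seqsight backbone and a binary sequence-classification head — neither is confirmed by this card, and the backbone may additionally require `trust_remote_code=True`:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_34M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_16384_512_34M-L32_f"

# Load the backbone with a 2-label head (a binary H3K4me2 call is an assumption),
# then attach the fine-tuned PEFT weights on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Score a DNA sequence; tokenization details depend on the seqsight tokenizer.
inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")
logits = model(**inputs).logits
```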
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_16384_512_34M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_16384_512_34M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_34M", "region:us" ]
null
2024-05-03T14:22:26+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_1-seqsight_32768_512_43M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.2537 - F1 Score: 0.8863 - Accuracy: 0.8863 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.4999 | 0.47 | 200 | 0.3566 | 0.8414 | 0.8415 | | 0.3821 | 0.95 | 400 | 0.3269 | 0.8540 | 0.8540 | | 0.3435 | 1.42 | 600 | 0.3149 | 0.8621 | 0.8621 | | 0.3439 | 1.9 | 800 | 0.2987 | 0.8657 | 0.8658 | | 0.3217 | 2.37 | 1000 | 0.2985 | 0.8696 | 0.8697 | | 0.3167 | 2.84 | 1200 | 0.2899 | 0.8747 | 0.8747 | | 0.3008 | 3.32 | 1400 | 0.2855 | 0.8753 | 0.8753 | | 0.3047 | 3.79 | 1600 | 0.2820 | 0.8743 | 0.8743 | | 0.3011 | 4.27 | 1800 | 0.2769 | 0.8753 | 0.8756 | | 0.2974 | 4.74 | 2000 | 0.2785 | 0.8766 | 0.8766 | | 0.2889 | 5.21 | 2200 | 0.2745 | 0.8787 | 0.8789 | | 0.2904 | 5.69 | 2400 | 0.2695 | 0.8811 | 0.8811 | | 0.2861 | 6.16 | 2600 | 0.2652 | 0.8832 | 0.8833 | | 0.2802 | 6.64 | 2800 | 0.2637 | 0.8836 | 0.8836 | | 0.2799 | 7.11 | 3000 | 0.2654 | 0.8854 | 0.8854 | | 0.2747 | 7.58 | 3200 | 0.2661 | 0.8826 | 0.8826 | | 0.2781 | 8.06 | 3400 | 0.2605 | 0.8858 | 0.8858 | | 0.2735 | 8.53 | 3600 | 0.2614 | 0.8866 | 0.8866 | | 0.2769 | 9.0 | 3800 | 0.2575 | 0.8857 | 0.8857 | | 0.2701 | 9.48 | 4000 | 0.2636 | 0.8861 | 0.8863 | | 0.2722 | 9.95 | 4200 | 0.2549 | 0.8870 | 0.8870 | | 0.2673 | 10.43 | 4400 | 0.2560 | 0.8879 | 0.8879 | | 0.2675 | 10.9 | 4600 | 0.2562 | 0.8910 | 0.8910 | | 0.263 | 11.37 | 4800 | 0.2539 | 0.8915 | 0.8915 | | 0.2721 | 11.85 | 5000 | 0.2533 | 0.8898 | 0.8898 | | 0.2635 | 12.32 | 5200 | 0.2581 | 0.8909 | 0.8909 | | 0.266 | 12.8 | 5400 | 0.2528 | 0.8918 | 0.8918 | | 0.2653 | 13.27 | 5600 | 0.2513 | 0.8908 | 0.8909 | | 0.2598 | 13.74 | 5800 | 0.2509 | 0.8925 | 0.8925 | | 0.2636 | 14.22 | 6000 | 0.2504 | 0.8910 | 0.8910 | | 0.2583 | 14.69 | 6200 | 0.2515 | 0.8922 | 0.8922 | | 0.2655 | 15.17 | 6400 | 0.2550 | 0.8919 | 0.8919 | | 0.2624 | 15.64 | 6600 | 0.2504 | 0.8929 | 0.8930 | | 0.2599 | 16.11 | 6800 | 0.2479 | 0.8927 | 0.8928 | | 0.2571 | 16.59 | 7000 | 0.2486 | 0.8953 | 0.8953 | | 0.2601 | 17.06 | 7200 | 0.2470 | 0.8949 | 0.8949 | | 0.2594 | 17.54 | 7400 | 0.2496 | 0.8941 | 0.8941 | | 0.2594 | 18.01 | 7600 | 0.2467 | 0.8950 | 0.8950 | | 0.2571 | 18.48 | 7800 | 0.2490 | 0.8941 | 0.8941 | | 0.2604 | 18.96 | 8000 | 0.2473 | 0.8941 | 0.8941 | | 0.2565 | 19.43 | 8200 | 0.2475 | 0.8965 | 0.8965 | | 0.2572 | 19.91 | 8400 | 0.2460 | 0.8936 | 0.8937 | | 0.257 | 20.38 | 8600 | 0.2465 | 0.8970 | 0.8970 | | 0.2561 | 20.85 | 8800 | 0.2463 | 0.8956 | 0.8956 | | 0.2526 | 21.33 | 9000 | 0.2475 
| 0.8964 | 0.8964 | | 0.2531 | 21.8 | 9200 | 0.2478 | 0.8964 | 0.8964 | | 0.2591 | 22.27 | 9400 | 0.2469 | 0.8959 | 0.8959 | | 0.254 | 22.75 | 9600 | 0.2462 | 0.8963 | 0.8964 | | 0.2574 | 23.22 | 9800 | 0.2465 | 0.8959 | 0.8959 | | 0.2575 | 23.7 | 10000 | 0.2464 | 0.8965 | 0.8965 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
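The hyperparameters above map almost one-to-one onto `TrainingArguments`; a sketch of reconstructing the reported setup (the output path is hypothetical, the eval cadence is read off the results table, and the rest of the Trainer wiring is assumed):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gue_mouse_1_seqsight_l1",  # hypothetical path
    max_steps=10_000,
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    evaluation_strategy="steps",
    eval_steps=200,  # the results table logs validation every 200 steps
    logging_steps=200,
)
```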
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_1-seqsight_32768_512_43M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_1-seqsight_32768_512_43M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:22:26+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_4-seqsight_32768_512_43M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset. It achieves the following results on the evaluation set: - Loss: 0.5964 - F1 Score: 0.6939 - Accuracy: 0.6941 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6317 | 1.69 | 200 | 0.6029 | 0.6769 | 0.6782 | | 0.5874 | 3.39 | 400 | 0.5773 | 0.6854 | 0.6883 | | 0.5618 | 5.08 | 600 | 0.5587 | 0.7217 | 0.7217 | | 0.546 | 6.78 | 800 | 0.5472 | 0.7191 | 0.7191 | | 0.5315 | 8.47 | 1000 | 0.5645 | 0.7062 | 0.7079 | | 0.5124 | 10.17 | 1200 | 0.5576 | 0.7169 | 0.7169 | | 0.5043 | 11.86 | 1400 | 0.5609 | 0.7185 | 0.7185 | | 0.484 | 13.56 | 1600 | 0.5663 | 0.7211 | 0.7212 | | 0.4742 | 15.25 | 1800 | 0.5853 | 0.7216 | 0.7217 | | 0.4602 | 16.95 | 2000 | 0.5788 | 0.7080 | 0.7100 | | 0.4476 | 18.64 | 2200 | 0.5648 | 0.7223 | 0.7223 | | 0.43 | 20.34 | 2400 | 0.6184 | 0.7188 | 0.7191 | | 0.4208 | 22.03 | 2600 | 0.6134 | 0.7186 | 0.7185 | | 0.4023 | 23.73 | 2800 | 0.6485 | 0.7131 | 0.7132 | | 0.3945 | 25.42 | 3000 | 0.6664 | 0.7175 | 0.7191 | | 0.3858 | 27.12 | 3200 | 0.6884 | 0.7117 | 0.7116 | | 0.3721 | 28.81 | 3400 | 0.6782 | 0.7178 | 0.7180 | | 0.3564 | 30.51 | 3600 | 0.7114 | 0.7149 | 0.7148 | | 0.3545 | 32.2 | 3800 | 0.7071 | 0.7078 | 0.7084 | | 0.3425 | 33.9 | 4000 | 0.7339 | 0.7028 | 0.7037 | | 0.3318 | 35.59 | 4200 | 0.7366 | 0.7064 | 0.7063 | | 0.3185 | 37.29 | 4400 | 0.7456 | 0.7069 | 0.7069 | | 0.3079 | 38.98 | 4600 | 0.7941 | 0.6991 | 0.6994 | | 0.3018 | 40.68 | 4800 | 0.7694 | 0.7078 | 0.7079 | | 0.2917 | 42.37 | 5000 | 0.8322 | 0.7068 | 0.7069 | | 0.2878 | 44.07 | 5200 | 0.8276 | 0.7047 | 0.7047 | | 0.2823 | 45.76 | 5400 | 0.8373 | 0.7067 | 0.7069 | | 0.269 | 47.46 | 5600 | 0.8612 | 0.7080 | 0.7079 | | 0.2698 | 49.15 | 5800 | 0.8352 | 0.7026 | 0.7026 | | 0.2556 | 50.85 | 6000 | 0.8812 | 0.7069 | 0.7069 | | 0.2476 | 52.54 | 6200 | 0.9216 | 0.7036 | 0.7042 | | 0.2524 | 54.24 | 6400 | 0.8557 | 0.7091 | 0.7090 | | 0.2501 | 55.93 | 6600 | 0.9083 | 0.7051 | 0.7053 | | 0.2378 | 57.63 | 6800 | 0.9140 | 0.7107 | 0.7106 | | 0.2333 | 59.32 | 7000 | 0.9598 | 0.7017 | 0.7015 | | 0.2363 | 61.02 | 7200 | 0.8962 | 0.7042 | 0.7042 | | 0.2248 | 62.71 | 7400 | 0.9316 | 0.6979 | 0.6978 | | 0.225 | 64.41 | 7600 | 0.9546 | 0.7116 | 0.7116 | | 0.2202 | 66.1 | 7800 | 0.9617 | 0.7048 | 0.7047 | | 0.2195 | 67.8 | 8000 | 0.9515 | 0.7048 | 0.7047 | | 0.2144 | 69.49 | 8200 | 0.9789 | 0.6995 | 0.6994 | | 0.2103 | 71.19 | 8400 | 0.9751 | 0.7053 | 0.7053 | | 0.2104 | 72.88 | 8600 | 0.9788 | 0.7080 | 0.7084 | | 0.2051 | 74.58 | 8800 | 0.9890 | 0.7080 | 0.7079 | | 0.2035 | 76.27 | 
9000 | 0.9850 | 0.7080 | 0.7079 | | 0.2051 | 77.97 | 9200 | 0.9909 | 0.7026 | 0.7026 | | 0.1999 | 79.66 | 9400 | 1.0087 | 0.7080 | 0.7079 | | 0.1925 | 81.36 | 9600 | 1.0246 | 0.7059 | 0.7058 | | 0.1992 | 83.05 | 9800 | 1.0152 | 0.7058 | 0.7058 | | 0.1987 | 84.75 | 10000 | 1.0124 | 0.7059 | 0.7058 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_4-seqsight_32768_512_43M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_4-seqsight_32768_512_43M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:22:28+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_4-seqsight_32768_512_43M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset. It achieves the following results on the evaluation set: - Loss: 0.5855 - F1 Score: 0.6982 - Accuracy: 0.6984 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6573 | 1.69 | 200 | 0.6152 | 0.6667 | 0.6670 | | 0.617 | 3.39 | 400 | 0.6072 | 0.6717 | 0.6734 | | 0.6032 | 5.08 | 600 | 0.5950 | 0.6811 | 0.6814 | | 0.5952 | 6.78 | 800 | 0.5895 | 0.6790 | 0.6798 | | 0.5877 | 8.47 | 1000 | 0.5817 | 0.6888 | 0.6888 | | 0.5811 | 10.17 | 1200 | 0.5810 | 0.6975 | 0.6978 | | 0.5765 | 11.86 | 1400 | 0.5763 | 0.6979 | 0.6978 | | 0.5709 | 13.56 | 1600 | 0.5743 | 0.6985 | 0.6984 | | 0.569 | 15.25 | 1800 | 0.5730 | 0.7000 | 0.6999 | | 0.5642 | 16.95 | 2000 | 0.5742 | 0.6965 | 0.6989 | | 0.5603 | 18.64 | 2200 | 0.5668 | 0.7050 | 0.7058 | | 0.5573 | 20.34 | 2400 | 0.5644 | 0.7086 | 0.7084 | | 0.557 | 22.03 | 2600 | 0.5865 | 0.6860 | 0.6920 | | 0.5545 | 23.73 | 2800 | 0.5598 | 0.7162 | 0.7164 | | 0.5478 | 25.42 | 3000 | 0.5631 | 0.7170 | 0.7169 | | 0.549 | 27.12 | 3200 | 0.5689 | 0.7078 | 0.7100 | | 0.5459 | 28.81 | 3400 | 0.5574 | 0.7230 | 0.7233 | | 0.5431 | 30.51 | 3600 | 0.5561 | 0.7260 | 0.7260 | | 0.5424 | 32.2 | 3800 | 0.5564 | 0.7277 | 0.7281 | | 0.5412 | 33.9 | 4000 | 0.5610 | 0.7200 | 0.7212 | | 0.536 | 35.59 | 4200 | 0.5696 | 0.7193 | 0.7212 | | 0.5323 | 37.29 | 4400 | 0.5608 | 0.7275 | 0.7281 | | 0.5336 | 38.98 | 4600 | 0.5550 | 0.7223 | 0.7223 | | 0.5308 | 40.68 | 4800 | 0.5746 | 0.7186 | 0.7212 | | 0.5289 | 42.37 | 5000 | 0.5629 | 0.7270 | 0.7276 | | 0.531 | 44.07 | 5200 | 0.5628 | 0.7209 | 0.7217 | | 0.5281 | 45.76 | 5400 | 0.5583 | 0.7216 | 0.7223 | | 0.524 | 47.46 | 5600 | 0.5593 | 0.7274 | 0.7276 | | 0.5263 | 49.15 | 5800 | 0.5593 | 0.7210 | 0.7223 | | 0.5239 | 50.85 | 6000 | 0.5602 | 0.7201 | 0.7212 | | 0.5232 | 52.54 | 6200 | 0.5573 | 0.7313 | 0.7313 | | 0.5219 | 54.24 | 6400 | 0.5546 | 0.7223 | 0.7228 | | 0.5235 | 55.93 | 6600 | 0.5543 | 0.7246 | 0.7249 | | 0.5205 | 57.63 | 6800 | 0.5516 | 0.7265 | 0.7265 | | 0.5165 | 59.32 | 7000 | 0.5595 | 0.7258 | 0.7265 | | 0.5208 | 61.02 | 7200 | 0.5550 | 0.7273 | 0.7276 | | 0.5159 | 62.71 | 7400 | 0.5577 | 0.7278 | 0.7281 | | 0.519 | 64.41 | 7600 | 0.5556 | 0.7250 | 0.7254 | | 0.519 | 66.1 | 7800 | 0.5561 | 0.7229 | 0.7233 | | 0.516 | 67.8 | 8000 | 0.5546 | 0.7258 | 0.7260 | | 0.5173 | 69.49 | 8200 | 0.5528 | 0.7294 | 0.7297 | | 0.5151 | 71.19 | 8400 | 0.5541 | 0.7278 | 0.7281 | | 0.5159 | 72.88 | 8600 | 0.5528 | 0.7275 | 0.7276 | | 0.5113 | 74.58 | 8800 | 0.5565 | 0.7266 | 0.7270 | | 0.5141 | 76.27 | 9000 
| 0.5574 | 0.7265 | 0.7270 | | 0.5157 | 77.97 | 9200 | 0.5563 | 0.7265 | 0.7270 | | 0.5129 | 79.66 | 9400 | 0.5548 | 0.7258 | 0.7260 | | 0.5146 | 81.36 | 9600 | 0.5545 | 0.7274 | 0.7276 | | 0.516 | 83.05 | 9800 | 0.5547 | 0.7267 | 0.7270 | | 0.5146 | 84.75 | 10000 | 0.5553 | 0.7277 | 0.7281 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_4-seqsight_32768_512_43M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_4-seqsight_32768_512_43M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:22:28+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_0-seqsight_32768_512_43M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.6723 - F1 Score: 0.7295 - Accuracy: 0.7296 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6203 | 3.92 | 200 | 0.5787 | 0.6882 | 0.6901 | | 0.5482 | 7.84 | 400 | 0.5451 | 0.7405 | 0.7407 | | 0.5042 | 11.76 | 600 | 0.5385 | 0.7254 | 0.7296 | | 0.4591 | 15.69 | 800 | 0.5251 | 0.7574 | 0.7580 | | 0.4232 | 19.61 | 1000 | 0.5538 | 0.7651 | 0.7654 | | 0.3842 | 23.53 | 1200 | 0.6002 | 0.7579 | 0.7580 | | 0.3437 | 27.45 | 1400 | 0.5976 | 0.7667 | 0.7667 | | 0.3056 | 31.37 | 1600 | 0.7249 | 0.7505 | 0.7519 | | 0.2826 | 35.29 | 1800 | 0.7146 | 0.7504 | 0.7506 | | 0.2464 | 39.22 | 2000 | 0.7265 | 0.7507 | 0.7506 | | 0.2234 | 43.14 | 2200 | 0.7985 | 0.7519 | 0.7519 | | 0.2081 | 47.06 | 2400 | 0.8031 | 0.7567 | 0.7568 | | 0.1827 | 50.98 | 2600 | 0.8586 | 0.7566 | 0.7568 | | 0.1681 | 54.9 | 2800 | 0.9018 | 0.7456 | 0.7457 | | 0.1544 | 58.82 | 3000 | 0.9888 | 0.7405 | 0.7407 | | 0.1384 | 62.75 | 3200 | 1.0131 | 0.7494 | 0.7494 | | 0.132 | 66.67 | 3400 | 1.0273 | 0.7579 | 0.7580 | | 0.12 | 70.59 | 3600 | 1.0749 | 0.7403 | 0.7407 | | 0.1132 | 74.51 | 3800 | 1.0950 | 0.7358 | 0.7358 | | 0.1097 | 78.43 | 4000 | 1.1436 | 0.7378 | 0.7383 | | 0.099 | 82.35 | 4200 | 1.1471 | 0.7334 | 0.7333 | | 0.0953 | 86.27 | 4400 | 1.2057 | 0.7407 | 0.7407 | | 0.0929 | 90.2 | 4600 | 1.1777 | 0.7419 | 0.7420 | | 0.0891 | 94.12 | 4800 | 1.1411 | 0.7430 | 0.7432 | | 0.0793 | 98.04 | 5000 | 1.2071 | 0.7445 | 0.7444 | | 0.0767 | 101.96 | 5200 | 1.1752 | 0.7383 | 0.7383 | | 0.0773 | 105.88 | 5400 | 1.1790 | 0.7370 | 0.7370 | | 0.0706 | 109.8 | 5600 | 1.2906 | 0.7456 | 0.7457 | | 0.0666 | 113.73 | 5800 | 1.2703 | 0.7382 | 0.7383 | | 0.0649 | 117.65 | 6000 | 1.2504 | 0.7457 | 0.7457 | | 0.0625 | 121.57 | 6200 | 1.2919 | 0.7432 | 0.7432 | | 0.0597 | 125.49 | 6400 | 1.3010 | 0.7290 | 0.7296 | | 0.0601 | 129.41 | 6600 | 1.3406 | 0.7480 | 0.7481 | | 0.0552 | 133.33 | 6800 | 1.3789 | 0.7395 | 0.7395 | | 0.0568 | 137.25 | 7000 | 1.3155 | 0.7432 | 0.7432 | | 0.0524 | 141.18 | 7200 | 1.3413 | 0.7321 | 0.7321 | | 0.0507 | 145.1 | 7400 | 1.3864 | 0.7445 | 0.7444 | | 0.0483 | 149.02 | 7600 | 1.3598 | 0.7407 | 0.7407 | | 0.0474 | 152.94 | 7800 | 1.3785 | 0.7445 | 0.7444 | | 0.0417 | 156.86 | 8000 | 1.4386 | 0.7469 | 0.7469 | | 0.0474 | 160.78 | 8200 | 1.3778 | 0.7432 | 0.7432 | | 0.0443 | 164.71 | 8400 | 1.4425 | 0.7305 | 0.7309 | | 0.0456 | 168.63 | 8600 | 1.4659 | 0.7407 | 0.7407 | | 0.0446 | 172.55 | 8800 | 1.3911 | 0.7395 | 
0.7395 | | 0.0402 | 176.47 | 9000 | 1.4602 | 0.7395 | 0.7395 | | 0.0412 | 180.39 | 9200 | 1.4362 | 0.7408 | 0.7407 | | 0.0387 | 184.31 | 9400 | 1.4318 | 0.7371 | 0.7370 | | 0.0423 | 188.24 | 9600 | 1.4263 | 0.7432 | 0.7432 | | 0.0407 | 192.16 | 9800 | 1.4117 | 0.7432 | 0.7432 | | 0.0403 | 196.08 | 10000 | 1.4189 | 0.7432 | 0.7432 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
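The paired F1/accuracy columns in these tables correspond to a standard `compute_metrics` hook; a sketch of how such values could be produced (the F1 averaging mode is an assumption, since the card does not state it):
```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),  # averaging mode assumed
        "accuracy": accuracy_score(labels, preds),
    }
```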
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_0-seqsight_32768_512_43M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_0-seqsight_32768_512_43M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:22:28+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_4-seqsight_32768_512_43M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset. It achieves the following results on the evaluation set: - Loss: 0.5862 - F1 Score: 0.6914 - Accuracy: 0.6936 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6405 | 1.69 | 200 | 0.6056 | 0.6719 | 0.6729 | | 0.5998 | 3.39 | 400 | 0.5973 | 0.6791 | 0.6835 | | 0.5818 | 5.08 | 600 | 0.5752 | 0.6997 | 0.6999 | | 0.5719 | 6.78 | 800 | 0.5671 | 0.7079 | 0.7079 | | 0.5608 | 8.47 | 1000 | 0.5733 | 0.7042 | 0.7063 | | 0.549 | 10.17 | 1200 | 0.5616 | 0.7134 | 0.7138 | | 0.5429 | 11.86 | 1400 | 0.5570 | 0.7118 | 0.7122 | | 0.5324 | 13.56 | 1600 | 0.5578 | 0.7197 | 0.7201 | | 0.5292 | 15.25 | 1800 | 0.5583 | 0.7215 | 0.7217 | | 0.5207 | 16.95 | 2000 | 0.5601 | 0.7164 | 0.7191 | | 0.5177 | 18.64 | 2200 | 0.5518 | 0.7150 | 0.7164 | | 0.5134 | 20.34 | 2400 | 0.5586 | 0.7227 | 0.7238 | | 0.5099 | 22.03 | 2600 | 0.5743 | 0.7071 | 0.7106 | | 0.5023 | 23.73 | 2800 | 0.5625 | 0.7118 | 0.7127 | | 0.4974 | 25.42 | 3000 | 0.5680 | 0.7176 | 0.7180 | | 0.4955 | 27.12 | 3200 | 0.5591 | 0.7176 | 0.7180 | | 0.491 | 28.81 | 3400 | 0.5626 | 0.7133 | 0.7138 | | 0.4841 | 30.51 | 3600 | 0.5655 | 0.7211 | 0.7212 | | 0.4843 | 32.2 | 3800 | 0.5584 | 0.7158 | 0.7164 | | 0.4809 | 33.9 | 4000 | 0.5682 | 0.7143 | 0.7148 | | 0.4736 | 35.59 | 4200 | 0.5741 | 0.7139 | 0.7143 | | 0.4675 | 37.29 | 4400 | 0.5745 | 0.7195 | 0.7196 | | 0.4662 | 38.98 | 4600 | 0.5763 | 0.7148 | 0.7148 | | 0.4611 | 40.68 | 4800 | 0.5971 | 0.7091 | 0.7111 | | 0.4552 | 42.37 | 5000 | 0.5868 | 0.7163 | 0.7164 | | 0.4575 | 44.07 | 5200 | 0.5880 | 0.7150 | 0.7153 | | 0.4523 | 45.76 | 5400 | 0.5929 | 0.7136 | 0.7143 | | 0.4448 | 47.46 | 5600 | 0.6072 | 0.7191 | 0.7191 | | 0.4473 | 49.15 | 5800 | 0.5905 | 0.7169 | 0.7169 | | 0.4368 | 50.85 | 6000 | 0.5992 | 0.7099 | 0.7106 | | 0.4407 | 52.54 | 6200 | 0.6063 | 0.7170 | 0.7169 | | 0.4375 | 54.24 | 6400 | 0.5985 | 0.7130 | 0.7138 | | 0.4359 | 55.93 | 6600 | 0.6044 | 0.7117 | 0.7116 | | 0.4266 | 57.63 | 6800 | 0.6082 | 0.7105 | 0.7106 | | 0.4247 | 59.32 | 7000 | 0.6136 | 0.7142 | 0.7148 | | 0.4311 | 61.02 | 7200 | 0.6047 | 0.7123 | 0.7127 | | 0.4237 | 62.71 | 7400 | 0.6130 | 0.7117 | 0.7116 | | 0.4221 | 64.41 | 7600 | 0.6120 | 0.7097 | 0.7100 | | 0.4217 | 66.1 | 7800 | 0.6192 | 0.7104 | 0.7106 | | 0.4162 | 67.8 | 8000 | 0.6190 | 0.7128 | 0.7127 | | 0.4173 | 69.49 | 8200 | 0.6208 | 0.7078 | 0.7079 | | 0.4155 | 71.19 | 8400 | 0.6187 | 0.7054 | 0.7053 | | 0.4177 | 72.88 | 8600 | 0.6202 | 0.7032 | 0.7031 | | 0.4113 | 74.58 | 8800 | 0.6260 | 0.7075 | 0.7074 | | 0.4127 | 76.27 
| 9000 | 0.6312 | 0.7054 | 0.7058 | | 0.4145 | 77.97 | 9200 | 0.6249 | 0.7051 | 0.7053 | | 0.41 | 79.66 | 9400 | 0.6259 | 0.7090 | 0.7090 | | 0.4075 | 81.36 | 9600 | 0.6283 | 0.7063 | 0.7063 | | 0.4097 | 83.05 | 9800 | 0.6286 | 0.7051 | 0.7053 | | 0.409 | 84.75 | 10000 | 0.6285 | 0.7046 | 0.7047 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_4-seqsight_32768_512_43M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_4-seqsight_32768_512_43M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:22:28+00:00
feature-extraction
transformers
{}
lyghter/2ch-wt-24-01-01-27480-3849-mel-512-pool-128e7-1x008-1-1
null
[ "transformers", "safetensors", "xlm-roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:22:36+00:00
null
null
{}
DioBot2000/output
null
[ "region:us" ]
null
2024-05-03T14:23:08+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
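The card itself is an unfilled template; going only by the repository name, a hedged sketch of how a Whisper large-v2 LoRA adapter would typically be attached — the base checkpoint and the PEFT layout are assumptions, not stated anywhere in this card:
```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

# Assumed base model; the card does not state what the LoRA was trained on.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")

# Attach the adapter; this presumes the repo stores standard PEFT weights.
model = PeftModel.from_pretrained(base, "ilyi/whisper-large-v2-lora")
model.eval()
```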
{"library_name": "transformers", "tags": []}
ilyi/whisper-large-v2-lora
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:23:36+00:00
null
peft
## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0
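The listed values map directly onto a `BitsAndBytesConfig`; a minimal sketch of recreating the quantization setup and attaching the adapter (the base checkpoint is an assumption — the card only implies Llama 2):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the reported config; the llm_int8_* fields are left at their
# defaults, which already match the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Hypothetical base model id; the exact Llama 2 checkpoint is not named.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "waelChafei/llama2-latest-summarization")
```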
{"library_name": "peft"}
waelChafei/llama2-latest-summarization
null
[ "peft", "region:us" ]
null
2024-05-03T14:24:13+00:00
feature-extraction
sentence-transformers
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: supporting various applications in natural language processing and understanding. ## How to Use This model can be integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from transformers import AutoModel, AutoTokenizer

# Full repository id, including the "fine-tuned/" org prefix.
llm_name = "fine-tuned/jina-embeddings-v2-base-en-03052024-0swb-webapp"

tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModel.from_pretrained(llm_name, trust_remote_code=True)

tokens = tokenizer("Your text here", return_tensors="pt")
outputs = model(**tokens)
# Mean-pool the token states to get a single sentence embedding.
embedding = outputs.last_hidden_state.mean(dim=1)
```

{"language": ["en"], "license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb", "Events", "Meetups", "Networking", "Social", "Community"], "datasets": ["fine-tuned/jina-embeddings-v2-base-en-03052024-0swb-webapp", "allenai/c4"], "pipeline_tag": "feature-extraction"}
fine-tuned/jina-embeddings-v2-base-en-03052024-0swb-webapp
null
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "Events", "Meetups", "Networking", "Social", "Community", "custom_code", "en", "dataset:fine-tuned/jina-embeddings-v2-base-en-03052024-0swb-webapp", "dataset:allenai/c4", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:24:58+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
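The card is an unfilled template, but the repo tags mark this as a Llama text-generation model; a hedged sketch of basic inference (prompt formatting and generation settings are assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibivibiv/llama3-8b-ultrafeedback-dpo-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the idea of direct preference optimization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```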
{"library_name": "transformers", "tags": []}
ibivibiv/llama3-8b-ultrafeedback-dpo-v2
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T14:25:06+00:00
null
null
{}
amnaijaz/Regression_Ridge
null
[ "region:us" ]
null
2024-05-03T14:25:54+00:00
text-classification
sklearn
# Model description [More Information Needed] ## Intended uses & limitations [More Information Needed] ## Training Procedure [More Information Needed] ### Hyperparameters <details> <summary> Click to expand </summary> | Hyperparameter | Value | |---|---| | memory | | | steps | [('transformer', MultiSkillTransformer()), ('clf', SVC(C=1, class_weight='balanced', kernel='linear'))] | | verbose | False | | transformer | MultiSkillTransformer() | | clf | SVC(C=1, class_weight='balanced', kernel='linear') | | clf__C | 1 | | clf__break_ties | False | | clf__cache_size | 200 | | clf__class_weight | balanced | | clf__coef0 | 0.0 | | clf__decision_function_shape | ovr | | clf__degree | 3 | | clf__gamma | scale | | clf__kernel | linear | | clf__max_iter | -1 | | clf__probability | False | | clf__random_state | | | clf__shrinking | True | | clf__tol | 0.001 | | clf__verbose | False | </details> ### Model Plot `Pipeline(steps=[('transformer', MultiSkillTransformer()), ('clf', SVC(C=1, class_weight='balanced', kernel='linear'))])` ## Evaluation Results [More Information Needed] # How to Get Started with the Model [More Information Needed] # Model Card Authors This model card is written by following authors: [More Information Needed] # Model Card Contact You can contact the model card authors through following channels: [More Information Needed] # Citation Below you can find information related to citation. **BibTeX:** ``` [More Information Needed] ``` # model_description Support Vector Machine (SVM) trained to predict if a skill span is a multiskill or not. # Classification Report <details> <summary> Click to expand </summary> | index | precision | recall | f1-score | support | |---|---|---|---|---| | SKILL | 0.871795 | 0.871795 | 0.871795 | 78 | | MULTISKILL | 0.89899 | 0.89899 | 0.89899 | 99 | | accuracy | 0.887006 | 0.887006 | 0.887006 | 0.887006 | | macro avg | 0.885392 | 0.885392 | 0.885392 | 177 | | weighted avg | 0.887006 | 0.887006 | 0.887006 | 177 | </details>
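A minimal sketch of pulling and unpickling the classifier; note that the pickle references the custom `MultiSkillTransformer`, which must be importable in your environment before loading (its source is not included in this card), and the expected input format is an assumption:
```python
import pickle
from huggingface_hub import hf_hub_download

# File name taken from this card's metadata.
path = hf_hub_download(
    repo_id="nestauk/multiskill-classifier",
    filename="multiskill-classifier8lnyq0he.pkl",
)

# Unpickling requires MultiSkillTransformer to be importable first.
with open(path, "rb") as f:
    clf = pickle.load(f)

# Hypothetical input: a list of skill-span strings.
print(clf.predict(["communicating with clients and managing budgets"]))
```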
{"license": "mit", "library_name": "sklearn", "tags": ["sklearn", "skops", "text-classification"], "model_format": "pickle", "model_file": "multiskill-classifier8lnyq0he.pkl"}
nestauk/multiskill-classifier
null
[ "sklearn", "skops", "text-classification", "license:mit", "region:us" ]
null
2024-05-03T14:26:13+00:00
null
null
{}
MellaTech/ahadu
null
[ "region:us" ]
null
2024-05-03T14:27:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Prototipo_3_EMI This model is a fine-tuned version of [dccuchile/distilbert-base-spanish-uncased](https://huggingface.co/dccuchile/distilbert-base-spanish-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2540 - Accuracy: 0.5423 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 250 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.2186 | 0.1778 | 200 | 1.1556 | 0.4803 | | 1.1345 | 0.3556 | 400 | 1.0663 | 0.525 | | 1.102 | 0.5333 | 600 | 1.0479 | 0.5293 | | 1.1325 | 0.7111 | 800 | 1.0483 | 0.5353 | | 1.1211 | 0.8889 | 1000 | 1.0337 | 0.521 | | 0.9736 | 1.0667 | 1200 | 1.0006 | 0.5503 | | 0.9428 | 1.2444 | 1400 | 1.0214 | 0.5523 | | 0.9095 | 1.4222 | 1600 | 1.0174 | 0.555 | | 0.9806 | 1.6 | 1800 | 1.0155 | 0.5527 | | 0.969 | 1.7778 | 2000 | 1.0043 | 0.5547 | | 0.9112 | 1.9556 | 2200 | 1.0050 | 0.5537 | | 0.7557 | 2.1333 | 2400 | 1.0496 | 0.5607 | | 0.8212 | 2.3111 | 2600 | 1.0494 | 0.5597 | | 0.7695 | 2.4889 | 2800 | 1.0510 | 0.5687 | | 0.7648 | 2.6667 | 3000 | 1.0513 | 0.5603 | | 0.8232 | 2.8444 | 3200 | 1.0316 | 0.563 | | 0.6288 | 3.0222 | 3400 | 1.0883 | 0.5503 | | 0.6736 | 3.2 | 3600 | 1.1232 | 0.548 | | 0.682 | 3.3778 | 3800 | 1.1695 | 0.543 | | 0.6682 | 3.5556 | 4000 | 1.1608 | 0.5427 | | 0.6516 | 3.7333 | 4200 | 1.1636 | 0.545 | | 0.6731 | 3.9111 | 4400 | 1.1694 | 0.5403 | | 0.5388 | 4.0889 | 4600 | 1.2120 | 0.544 | | 0.5663 | 4.2667 | 4800 | 1.2278 | 0.544 | | 0.5579 | 4.4444 | 5000 | 1.2439 | 0.538 | | 0.5216 | 4.6222 | 5200 | 1.2507 | 0.5427 | | 0.4634 | 4.8 | 5400 | 1.2531 | 0.5393 | | 0.5359 | 4.9778 | 5600 | 1.2540 | 0.5423 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
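A minimal sketch of running the fine-tuned classifier through the `pipeline` API; the label set is whatever was saved in the checkpoint's config and is not documented in this card:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="Armandodelca/Prototipo_3_EMI")
# Spanish input, matching the distilbert-base-spanish-uncased backbone.
print(clf("Este producto superó todas mis expectativas."))
```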
{"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "dccuchile/distilbert-base-spanish-uncased", "model-index": [{"name": "Prototipo_3_EMI", "results": []}]}
Armandodelca/Prototipo_3_EMI
null
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:dccuchile/distilbert-base-spanish-uncased", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:28:13+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/38v8mr7
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T14:28:35+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
OwOpeepeepoopoo/herewegoagain17
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:30:29+00:00
text-generation
transformers
# nbeerbower/slerp-bophades-truthy-math-mistral-7B AWQ - Model creator: [nbeerbower](https://huggingface.co/nbeerbower) - Original model: [slerp-bophades-truthy-math-mistral-7B](https://huggingface.co/nbeerbower/slerp-bophades-truthy-math-mistral-7B) ## How to use ### Install the necessary packages ```bash pip install --upgrade autoawq autoawq-kernels ``` ### Example Python code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer, TextStreamer model_path = "solidrust/slerp-bophades-truthy-math-mistral-7B-AWQ" system_message = "You are slerp-bophades-truthy-math-mistral-7B, incarnated as a powerful AI. You were created by nbeerbower." # Load model model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) # Convert prompt to tokens prompt_template = """\ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant""" prompt = "You're standing on the surface of the Earth. "\ "You walk one mile south, one mile west and one mile north. "\ "You end up exactly where you started. Where are you?" tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt), return_tensors='pt').input_ids.cuda() # Generate output generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512) ``` ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference at equivalent or better quality than the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
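Since the card above notes Transformers 4.35.0+ support, here is a minimal alternative sketch that loads the same checkpoint through plain Transformers rather than the AutoAWQ loader. It assumes the `autoawq` and `accelerate` packages are installed and a CUDA GPU is available; the prompt and generation settings are illustrative only.

```python
# Hedged sketch: loading the AWQ checkpoint via plain Transformers (4.35.0+).
# Transformers reads the AWQ quantization config from the repo and dispatches
# to the autoawq kernels; device_map="auto" assumes accelerate is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "solidrust/slerp-bophades-truthy-math-mistral-7B-AWQ"

model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_path)

inputs = tokenizer("Explain AWQ quantization in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```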
{"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"}
solidrust/slerp-bophades-truthy-math-mistral-7B-AWQ
null
[ "transformers", "safetensors", "mistral", "text-generation", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T14:30:34+00:00
null
null
{"license": "apache-2.0"}
sd-concepts-library/arrcr2
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-03T14:30:38+00:00
text-classification
transformers
{}
EndlessRecurrence/rasa-diet-classifier
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:30:55+00:00
text-classification
transformers
{}
Agatha1911/fine-tuned-BERT-model-text-classification
null
[ "transformers", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:31:33+00:00
text-generation
transformers
# flammenai/flammen15X-mistral-7B AWQ - Model creator: [flammenai](https://huggingface.co/flammenai) - Original model: [flammen15X-mistral-7B](https://huggingface.co/flammenai/flammen15X-mistral-7B) ## How to use ### Install the necessary packages ```bash pip install --upgrade autoawq autoawq-kernels ``` ### Example Python code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer, TextStreamer model_path = "solidrust/flammen15X-mistral-7B-AWQ" system_message = "You are flammen15X-mistral-7B, incarnated as a powerful AI. You were created by flammenai." # Load model model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) # Convert prompt to tokens prompt_template = """\ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant""" prompt = "You're standing on the surface of the Earth. "\ "You walk one mile south, one mile west and one mile north. "\ "You end up exactly where you started. Where are you?" tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt), return_tensors='pt').input_ids.cuda() # Generate output generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512) ``` ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference at equivalent or better quality than the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
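As a complement to the AutoAWQ example above, the following is a hedged sketch of offline batch generation with vLLM (version 0.2.2 or later, per the support list). The prompt and sampling parameters are placeholders, not recommended settings.

```python
# Illustrative vLLM usage for this AWQ model; quantization="awq" selects the
# AWQ kernels. Requires a Linux machine with an NVIDIA GPU.
from vllm import LLM, SamplingParams

llm = LLM(model="solidrust/flammen15X-mistral-7B-AWQ", quantization="awq")
sampling = SamplingParams(temperature=0.7, max_tokens=256)

for output in llm.generate(["Write a haiku about quantization."], sampling):
    print(output.outputs[0].text)
```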
{"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"}
solidrust/flammen15X-mistral-7B-AWQ
null
[ "transformers", "safetensors", "mistral", "text-generation", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T14:31:52+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_3-seqsight_32768_512_43M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset. It achieves the following results on the evaluation set: - Loss: 0.7623 - F1 Score: 0.8115 - Accuracy: 0.8117 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.555 | 13.33 | 200 | 0.4424 | 0.7699 | 0.7699 | | 0.3629 | 26.67 | 400 | 0.3846 | 0.8284 | 0.8285 | | 0.2668 | 40.0 | 600 | 0.4172 | 0.8159 | 0.8159 | | 0.2103 | 53.33 | 800 | 0.4708 | 0.8282 | 0.8285 | | 0.1661 | 66.67 | 1000 | 0.5590 | 0.8117 | 0.8117 | | 0.133 | 80.0 | 1200 | 0.5868 | 0.8449 | 0.8452 | | 0.1091 | 93.33 | 1400 | 0.6657 | 0.7988 | 0.7992 | | 0.0976 | 106.67 | 1600 | 0.6974 | 0.8116 | 0.8117 | | 0.0782 | 120.0 | 1800 | 0.7658 | 0.7866 | 0.7866 | | 0.0674 | 133.33 | 2000 | 0.7918 | 0.7991 | 0.7992 | | 0.058 | 146.67 | 2200 | 0.7780 | 0.8199 | 0.8201 | | 0.0569 | 160.0 | 2400 | 0.8391 | 0.8117 | 0.8117 | | 0.0455 | 173.33 | 2600 | 0.9611 | 0.8158 | 0.8159 | | 0.0415 | 186.67 | 2800 | 0.9182 | 0.8158 | 0.8159 | | 0.0414 | 200.0 | 3000 | 0.9038 | 0.8234 | 0.8243 | | 0.0363 | 213.33 | 3200 | 0.9562 | 0.8200 | 0.8201 | | 0.0379 | 226.67 | 3400 | 0.9500 | 0.8115 | 0.8117 | | 0.0295 | 240.0 | 3600 | 0.9630 | 0.8074 | 0.8075 | | 0.0272 | 253.33 | 3800 | 0.9748 | 0.8033 | 0.8033 | | 0.0274 | 266.67 | 4000 | 0.9672 | 0.8159 | 0.8159 | | 0.0255 | 280.0 | 4200 | 0.9223 | 0.8367 | 0.8368 | | 0.0221 | 293.33 | 4400 | 1.0377 | 0.8158 | 0.8159 | | 0.0219 | 306.67 | 4600 | 0.9940 | 0.8241 | 0.8243 | | 0.0213 | 320.0 | 4800 | 0.9899 | 0.8242 | 0.8243 | | 0.0227 | 333.33 | 5000 | 0.9454 | 0.8242 | 0.8243 | | 0.018 | 346.67 | 5200 | 1.0548 | 0.8159 | 0.8159 | | 0.0196 | 360.0 | 5400 | 1.0513 | 0.8157 | 0.8159 | | 0.0177 | 373.33 | 5600 | 1.0282 | 0.8243 | 0.8243 | | 0.0162 | 386.67 | 5800 | 1.1252 | 0.8283 | 0.8285 | | 0.0135 | 400.0 | 6000 | 1.1668 | 0.8201 | 0.8201 | | 0.0161 | 413.33 | 6200 | 1.1143 | 0.8283 | 0.8285 | | 0.0148 | 426.67 | 6400 | 1.1679 | 0.8242 | 0.8243 | | 0.0152 | 440.0 | 6600 | 1.1737 | 0.8239 | 0.8243 | | 0.0137 | 453.33 | 6800 | 1.1314 | 0.8240 | 0.8243 | | 0.0109 | 466.67 | 7000 | 1.1744 | 0.8200 | 0.8201 | | 0.0143 | 480.0 | 7200 | 1.1200 | 0.8449 | 0.8452 | | 0.0105 | 493.33 | 7400 | 1.1679 | 0.8284 | 0.8285 | | 0.0118 | 506.67 | 7600 | 1.1535 | 0.8284 | 0.8285 | | 0.011 | 520.0 | 7800 | 1.1421 | 0.8284 | 0.8285 | | 0.0114 | 533.33 | 8000 | 1.1654 | 0.8242 | 0.8243 | | 0.0104 | 546.67 | 8200 | 1.2144 | 0.8201 | 0.8201 | | 0.0086 | 560.0 | 8400 | 1.2283 | 0.8243 | 0.8243 | | 0.0096 | 573.33 | 8600 | 1.2220 | 0.8326 | 0.8326 | | 0.0094 | 586.67 | 8800 | 1.1976 | 
0.8243 | 0.8243 | | 0.0102 | 600.0 | 9000 | 1.1849 | 0.8284 | 0.8285 | | 0.0091 | 613.33 | 9200 | 1.1721 | 0.8201 | 0.8201 | | 0.0085 | 626.67 | 9400 | 1.2208 | 0.8201 | 0.8201 | | 0.0085 | 640.0 | 9600 | 1.2105 | 0.8243 | 0.8243 | | 0.0092 | 653.33 | 9800 | 1.2049 | 0.8243 | 0.8243 | | 0.0084 | 666.67 | 10000 | 1.2050 | 0.8201 | 0.8201 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
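For readers who want to run this adapter, the sketch below shows the generic PEFT loading pattern under stated assumptions: the base checkpoint may use a custom architecture (hence `trust_remote_code=True`), and the sequence-classification head class is a guess, not something the card confirms.

```python
# Hedged sketch: attaching the PEFT adapter to its base model for inference.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_32768_512_43M"
adapter_id = "mahdibaghbanzadeh/GUE_mouse_3-seqsight_32768_512_43M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()  # inference mode; model.merge_and_unload() would fold the adapter in
```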
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_3-seqsight_32768_512_43M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_3-seqsight_32768_512_43M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:31:56+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_3-seqsight_32768_512_43M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset. It achieves the following results on the evaluation set: - Loss: 0.7451 - F1 Score: 0.8032 - Accuracy: 0.8033 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6013 | 13.33 | 200 | 0.5247 | 0.7186 | 0.7197 | | 0.4902 | 26.67 | 400 | 0.4775 | 0.7612 | 0.7615 | | 0.4277 | 40.0 | 600 | 0.4410 | 0.7858 | 0.7866 | | 0.3736 | 53.33 | 800 | 0.3964 | 0.8325 | 0.8326 | | 0.3278 | 66.67 | 1000 | 0.3932 | 0.8283 | 0.8285 | | 0.2964 | 80.0 | 1200 | 0.3917 | 0.8326 | 0.8326 | | 0.2727 | 93.33 | 1400 | 0.3935 | 0.8325 | 0.8326 | | 0.2528 | 106.67 | 1600 | 0.4000 | 0.8242 | 0.8243 | | 0.2295 | 120.0 | 1800 | 0.4109 | 0.8325 | 0.8326 | | 0.2162 | 133.33 | 2000 | 0.4344 | 0.8243 | 0.8243 | | 0.2028 | 146.67 | 2200 | 0.4479 | 0.8243 | 0.8243 | | 0.1902 | 160.0 | 2400 | 0.4792 | 0.8158 | 0.8159 | | 0.1799 | 173.33 | 2600 | 0.5062 | 0.8113 | 0.8117 | | 0.1687 | 186.67 | 2800 | 0.4971 | 0.8326 | 0.8326 | | 0.1651 | 200.0 | 3000 | 0.5520 | 0.8152 | 0.8159 | | 0.1582 | 213.33 | 3200 | 0.5134 | 0.8200 | 0.8201 | | 0.1462 | 226.67 | 3400 | 0.5592 | 0.8325 | 0.8326 | | 0.1394 | 240.0 | 3600 | 0.5829 | 0.8200 | 0.8201 | | 0.1332 | 253.33 | 3800 | 0.5996 | 0.8070 | 0.8075 | | 0.131 | 266.67 | 4000 | 0.5894 | 0.8200 | 0.8201 | | 0.1216 | 280.0 | 4200 | 0.6010 | 0.8199 | 0.8201 | | 0.1182 | 293.33 | 4400 | 0.6116 | 0.8199 | 0.8201 | | 0.1167 | 306.67 | 4600 | 0.6240 | 0.8368 | 0.8368 | | 0.1116 | 320.0 | 4800 | 0.6361 | 0.8199 | 0.8201 | | 0.1171 | 333.33 | 5000 | 0.6405 | 0.8072 | 0.8075 | | 0.105 | 346.67 | 5200 | 0.6458 | 0.8326 | 0.8326 | | 0.1044 | 360.0 | 5400 | 0.6778 | 0.8072 | 0.8075 | | 0.1013 | 373.33 | 5600 | 0.6605 | 0.8242 | 0.8243 | | 0.0976 | 386.67 | 5800 | 0.6878 | 0.8242 | 0.8243 | | 0.0928 | 400.0 | 6000 | 0.7017 | 0.8368 | 0.8368 | | 0.0991 | 413.33 | 6200 | 0.6914 | 0.8199 | 0.8201 | | 0.0961 | 426.67 | 6400 | 0.7004 | 0.8241 | 0.8243 | | 0.0989 | 440.0 | 6600 | 0.6938 | 0.8197 | 0.8201 | | 0.0901 | 453.33 | 6800 | 0.7306 | 0.8198 | 0.8201 | | 0.0907 | 466.67 | 7000 | 0.7197 | 0.8157 | 0.8159 | | 0.0872 | 480.0 | 7200 | 0.7188 | 0.8158 | 0.8159 | | 0.0857 | 493.33 | 7400 | 0.7279 | 0.8198 | 0.8201 | | 0.0829 | 506.67 | 7600 | 0.7321 | 0.8155 | 0.8159 | | 0.0832 | 520.0 | 7800 | 0.7509 | 0.8158 | 0.8159 | | 0.0836 | 533.33 | 8000 | 0.7534 | 0.8114 | 0.8117 | | 0.0788 | 546.67 | 8200 | 0.7651 | 0.8033 | 0.8033 | | 0.0816 | 560.0 | 8400 | 0.7707 | 0.8158 | 0.8159 | | 0.0776 | 573.33 | 8600 | 0.7720 | 0.8199 | 0.8201 | | 0.0792 | 586.67 | 8800 | 0.7674 | 
0.8116 | 0.8117 | | 0.0791 | 600.0 | 9000 | 0.7599 | 0.8115 | 0.8117 | | 0.081 | 613.33 | 9200 | 0.7487 | 0.8115 | 0.8117 | | 0.076 | 626.67 | 9400 | 0.7571 | 0.8115 | 0.8117 | | 0.0739 | 640.0 | 9600 | 0.7649 | 0.8115 | 0.8117 | | 0.0763 | 653.33 | 9800 | 0.7626 | 0.8200 | 0.8201 | | 0.0776 | 666.67 | 10000 | 0.7622 | 0.8200 | 0.8201 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
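The hyperparameter list above maps almost one-to-one onto the Hugging Face `TrainingArguments` API; the sketch below is a reconstruction for orientation, not the authors' actual training script (the Adam betas/epsilon shown in the card are the `Trainer` defaults).

```python
# Illustrative TrainingArguments mirroring the card's hyperparameters:
# lr 5e-4, train/eval batch size 128, seed 42, linear schedule, 10000 steps.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_mouse_3-seqsight_32768_512_43M-L1_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```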
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_3-seqsight_32768_512_43M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_3-seqsight_32768_512_43M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:31:56+00:00
text-classification
transformers
This model is a fine-tuned version of the ```FacebookAI/xlm-roberta-base``` model for the **Bengali** and **Hindi** languages. The dataset used is the Kaggle dataset [Modified-hate-speech-bengali-hindi](https://www.kaggle.com/datasets/abirmondal/modified-hate-speech-bengali-hindi). This model can classify Bengali and Hindi texts into the following 5 classes: - defamation - hate - non-hate - violence - vulgar
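A minimal usage sketch with the Transformers `pipeline` API is shown below; the example sentence is arbitrary, and the exact label strings returned should be checked against the five classes listed above.

```python
# Hedged usage sketch for the fine-tuned hate-speech classifier.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="kingshukroy/xlm-roberta-base-hate-speech-ben-hin",
)
print(classifier("এটি একটি উদাহরণ বাক্য।"))  # Bengali: "This is an example sentence."
```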
{"language": ["bn", "hi"], "license": "apache-2.0"}
kingshukroy/xlm-roberta-base-hate-speech-ben-hin
null
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "bn", "hi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:31:56+00:00
null
null
# This repository is for the 2110446 Data Science and Data Engineering project. # Project Structure ### DataGathering This folder is mainly for collecting data from other sources. #### 1. GoogleGeocoding Gathers the geolocation of each affiliation name using the Google Geocoding API. | Directory/File | Description | | -------------- | --------------------------------------------------------------------- | | geocode.csv | contains the geolocation of each affiliation (aff_id, aff_name, lat, lon) | #### 2. ScopusAPI Queries Scopus data using the Scopus Search API. | Directory/File | Description | | --------------------- | --------------------------------------------------------- | | example_api_data.json | contains example data fetched from the Scopus Search API |
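To illustrate the GoogleGeocoding step, here is a hypothetical sketch of resolving one affiliation name to a latitude/longitude pair. The endpoint and response shape follow the public Google Geocoding API, but `GOOGLE_API_KEY` and the sample affiliation are placeholders, and the repository's actual script may differ.

```python
# Hypothetical geocoding helper producing (lat, lon) rows like geocode.csv.
import requests

GOOGLE_API_KEY = "YOUR_API_KEY"  # placeholder
GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"

def geocode(affiliation_name: str):
    resp = requests.get(
        GEOCODE_URL,
        params={"address": affiliation_name, "key": GOOGLE_API_KEY},
        timeout=10,
    )
    results = resp.json().get("results", [])
    if not results:
        return None  # affiliation could not be geocoded
    loc = results[0]["geometry"]["location"]
    return loc["lat"], loc["lng"]

print(geocode("Chulalongkorn University"))  # hypothetical affiliation name
```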
{}
when-my-cat-learn-datasci/datasci-final-project-2024
null
[ "region:us" ]
null
2024-05-03T14:32:03+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_2-seqsight_32768_512_43M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.3732 - F1 Score: 0.8658 - Accuracy: 0.8659 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.454 | 9.52 | 200 | 0.3612 | 0.8322 | 0.8323 | | 0.3199 | 19.05 | 400 | 0.3407 | 0.8383 | 0.8384 | | 0.2886 | 28.57 | 600 | 0.3299 | 0.8414 | 0.8415 | | 0.2693 | 38.1 | 800 | 0.3207 | 0.8476 | 0.8476 | | 0.2506 | 47.62 | 1000 | 0.3171 | 0.8414 | 0.8415 | | 0.2324 | 57.14 | 1200 | 0.3145 | 0.8627 | 0.8628 | | 0.2145 | 66.67 | 1400 | 0.3219 | 0.8719 | 0.8720 | | 0.203 | 76.19 | 1600 | 0.3365 | 0.8626 | 0.8628 | | 0.1876 | 85.71 | 1800 | 0.3527 | 0.8624 | 0.8628 | | 0.1755 | 95.24 | 2000 | 0.3300 | 0.8719 | 0.8720 | | 0.1659 | 104.76 | 2200 | 0.3487 | 0.8750 | 0.875 | | 0.1492 | 114.29 | 2400 | 0.3723 | 0.8749 | 0.875 | | 0.1454 | 123.81 | 2600 | 0.3813 | 0.8780 | 0.8780 | | 0.1383 | 133.33 | 2800 | 0.3897 | 0.8780 | 0.8780 | | 0.128 | 142.86 | 3000 | 0.4353 | 0.8748 | 0.875 | | 0.1228 | 152.38 | 3200 | 0.4500 | 0.8687 | 0.8689 | | 0.1203 | 161.9 | 3400 | 0.4626 | 0.8716 | 0.8720 | | 0.1174 | 171.43 | 3600 | 0.4549 | 0.8747 | 0.875 | | 0.1129 | 180.95 | 3800 | 0.4300 | 0.8688 | 0.8689 | | 0.1106 | 190.48 | 4000 | 0.4420 | 0.8780 | 0.8780 | | 0.102 | 200.0 | 4200 | 0.4784 | 0.8656 | 0.8659 | | 0.0992 | 209.52 | 4400 | 0.5022 | 0.8655 | 0.8659 | | 0.0932 | 219.05 | 4600 | 0.4891 | 0.8688 | 0.8689 | | 0.0941 | 228.57 | 4800 | 0.4837 | 0.8718 | 0.8720 | | 0.0893 | 238.1 | 5000 | 0.5372 | 0.8625 | 0.8628 | | 0.0894 | 247.62 | 5200 | 0.5028 | 0.8687 | 0.8689 | | 0.0866 | 257.14 | 5400 | 0.5503 | 0.8686 | 0.8689 | | 0.0843 | 266.67 | 5600 | 0.5312 | 0.8626 | 0.8628 | | 0.0811 | 276.19 | 5800 | 0.5398 | 0.8656 | 0.8659 | | 0.0804 | 285.71 | 6000 | 0.5454 | 0.8687 | 0.8689 | | 0.0777 | 295.24 | 6200 | 0.5398 | 0.8656 | 0.8659 | | 0.076 | 304.76 | 6400 | 0.5483 | 0.8656 | 0.8659 | | 0.0761 | 314.29 | 6600 | 0.5600 | 0.8687 | 0.8689 | | 0.0745 | 323.81 | 6800 | 0.5477 | 0.8718 | 0.8720 | | 0.073 | 333.33 | 7000 | 0.5647 | 0.8656 | 0.8659 | | 0.072 | 342.86 | 7200 | 0.5622 | 0.8656 | 0.8659 | | 0.0699 | 352.38 | 7400 | 0.5793 | 0.8687 | 0.8689 | | 0.0699 | 361.9 | 7600 | 0.5593 | 0.8718 | 0.8720 | | 0.0718 | 371.43 | 7800 | 0.5890 | 0.8687 | 0.8689 | | 0.0662 | 380.95 | 8000 | 0.5791 | 0.8687 | 0.8689 | | 0.0688 | 390.48 | 8200 | 0.5699 | 0.8656 | 0.8659 | | 0.0675 | 400.0 | 8400 | 0.5741 | 0.8687 | 0.8689 | | 0.068 | 409.52 | 8600 | 0.5728 | 0.8687 | 0.8689 | | 0.065 | 419.05 | 8800 | 0.5704 | 0.8687 | 
0.8689 | | 0.065 | 428.57 | 9000 | 0.5923 | 0.8687 | 0.8689 | | 0.0645 | 438.1 | 9200 | 0.5827 | 0.8687 | 0.8689 | | 0.0647 | 447.62 | 9400 | 0.5951 | 0.8687 | 0.8689 | | 0.0614 | 457.14 | 9600 | 0.5868 | 0.8687 | 0.8689 | | 0.066 | 466.67 | 9800 | 0.5821 | 0.8687 | 0.8689 | | 0.0651 | 476.19 | 10000 | 0.5859 | 0.8687 | 0.8689 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_2-seqsight_32768_512_43M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_2-seqsight_32768_512_43M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:32:04+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_2-seqsight_32768_512_43M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.9727 - F1 Score: 0.8841 - Accuracy: 0.8841 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.3985 | 9.52 | 200 | 0.3315 | 0.8533 | 0.8537 | | 0.2679 | 19.05 | 400 | 0.3289 | 0.8472 | 0.8476 | | 0.2203 | 28.57 | 600 | 0.3222 | 0.8689 | 0.8689 | | 0.1727 | 38.1 | 800 | 0.3561 | 0.8810 | 0.8811 | | 0.1381 | 47.62 | 1000 | 0.4427 | 0.8871 | 0.8872 | | 0.1136 | 57.14 | 1200 | 0.5343 | 0.8683 | 0.8689 | | 0.0914 | 66.67 | 1400 | 0.5551 | 0.8841 | 0.8841 | | 0.0812 | 76.19 | 1600 | 0.5796 | 0.8685 | 0.8689 | | 0.0656 | 85.71 | 1800 | 0.5796 | 0.8811 | 0.8811 | | 0.0589 | 95.24 | 2000 | 0.6605 | 0.8748 | 0.875 | | 0.0542 | 104.76 | 2200 | 0.7135 | 0.8716 | 0.8720 | | 0.0452 | 114.29 | 2400 | 0.6585 | 0.8841 | 0.8841 | | 0.0402 | 123.81 | 2600 | 0.7272 | 0.8809 | 0.8811 | | 0.0373 | 133.33 | 2800 | 0.7292 | 0.8748 | 0.875 | | 0.0307 | 142.86 | 3000 | 0.6910 | 0.8749 | 0.875 | | 0.0302 | 152.38 | 3200 | 0.7471 | 0.8810 | 0.8811 | | 0.0279 | 161.9 | 3400 | 0.7656 | 0.8687 | 0.8689 | | 0.0257 | 171.43 | 3600 | 0.7229 | 0.8810 | 0.8811 | | 0.0231 | 180.95 | 3800 | 0.6869 | 0.8749 | 0.875 | | 0.022 | 190.48 | 4000 | 0.7588 | 0.8872 | 0.8872 | | 0.0197 | 200.0 | 4200 | 0.8807 | 0.8778 | 0.8780 | | 0.0184 | 209.52 | 4400 | 0.8446 | 0.8748 | 0.875 | | 0.0175 | 219.05 | 4600 | 0.7668 | 0.8780 | 0.8780 | | 0.0184 | 228.57 | 4800 | 0.7068 | 0.8811 | 0.8811 | | 0.0169 | 238.1 | 5000 | 0.7346 | 0.8841 | 0.8841 | | 0.0175 | 247.62 | 5200 | 0.7376 | 0.8811 | 0.8811 | | 0.0134 | 257.14 | 5400 | 0.7959 | 0.8810 | 0.8811 | | 0.0116 | 266.67 | 5600 | 0.8400 | 0.8809 | 0.8811 | | 0.0152 | 276.19 | 5800 | 0.8025 | 0.8780 | 0.8780 | | 0.0142 | 285.71 | 6000 | 0.7747 | 0.8749 | 0.875 | | 0.0119 | 295.24 | 6200 | 0.7905 | 0.8750 | 0.875 | | 0.0118 | 304.76 | 6400 | 0.8220 | 0.8750 | 0.875 | | 0.0111 | 314.29 | 6600 | 0.8242 | 0.8719 | 0.8720 | | 0.011 | 323.81 | 6800 | 0.7764 | 0.8811 | 0.8811 | | 0.0092 | 333.33 | 7000 | 0.8180 | 0.8719 | 0.8720 | | 0.0114 | 342.86 | 7200 | 0.7806 | 0.8811 | 0.8811 | | 0.01 | 352.38 | 7400 | 0.8355 | 0.8809 | 0.8811 | | 0.0085 | 361.9 | 7600 | 0.8427 | 0.8840 | 0.8841 | | 0.0095 | 371.43 | 7800 | 0.8382 | 0.8841 | 0.8841 | | 0.0075 | 380.95 | 8000 | 0.8342 | 0.8871 | 0.8872 | | 0.0072 | 390.48 | 8200 | 0.8775 | 0.8902 | 0.8902 | | 0.0093 | 400.0 | 8400 | 0.8227 | 0.8841 | 0.8841 | | 0.0075 | 409.52 | 8600 | 0.8249 | 0.8841 | 0.8841 | | 0.0074 | 419.05 | 8800 | 0.8233 | 0.8811 
| 0.8811 | | 0.0082 | 428.57 | 9000 | 0.8354 | 0.8841 | 0.8841 | | 0.0082 | 438.1 | 9200 | 0.8328 | 0.8811 | 0.8811 | | 0.0064 | 447.62 | 9400 | 0.8586 | 0.8810 | 0.8811 | | 0.006 | 457.14 | 9600 | 0.8593 | 0.8810 | 0.8811 | | 0.0057 | 466.67 | 9800 | 0.8524 | 0.8811 | 0.8811 | | 0.006 | 476.19 | 10000 | 0.8596 | 0.8810 | 0.8811 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_2-seqsight_32768_512_43M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_2-seqsight_32768_512_43M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:32:07+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_3-seqsight_32768_512_43M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset. It achieves the following results on the evaluation set: - Loss: 1.3538 - F1 Score: 0.8368 - Accuracy: 0.8368 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.4945 | 13.33 | 200 | 0.3604 | 0.8487 | 0.8494 | | 0.25 | 26.67 | 400 | 0.4718 | 0.8410 | 0.8410 | | 0.1496 | 40.0 | 600 | 0.5533 | 0.8448 | 0.8452 | | 0.0947 | 53.33 | 800 | 0.6449 | 0.8200 | 0.8201 | | 0.0622 | 66.67 | 1000 | 0.8307 | 0.8242 | 0.8243 | | 0.0399 | 80.0 | 1200 | 0.9556 | 0.8033 | 0.8033 | | 0.0309 | 93.33 | 1400 | 0.8286 | 0.8285 | 0.8285 | | 0.0296 | 106.67 | 1600 | 1.0323 | 0.8282 | 0.8285 | | 0.022 | 120.0 | 1800 | 0.9626 | 0.8282 | 0.8285 | | 0.0206 | 133.33 | 2000 | 0.9417 | 0.8195 | 0.8201 | | 0.019 | 146.67 | 2200 | 0.9371 | 0.8410 | 0.8410 | | 0.0156 | 160.0 | 2400 | 0.9515 | 0.8408 | 0.8410 | | 0.0147 | 173.33 | 2600 | 1.1014 | 0.8157 | 0.8159 | | 0.0117 | 186.67 | 2800 | 0.9790 | 0.8326 | 0.8326 | | 0.0142 | 200.0 | 3000 | 0.9529 | 0.8159 | 0.8159 | | 0.0078 | 213.33 | 3200 | 1.1415 | 0.8325 | 0.8326 | | 0.011 | 226.67 | 3400 | 1.0354 | 0.8200 | 0.8201 | | 0.0079 | 240.0 | 3600 | 1.1383 | 0.8240 | 0.8243 | | 0.0084 | 253.33 | 3800 | 0.9914 | 0.8408 | 0.8410 | | 0.0074 | 266.67 | 4000 | 1.2147 | 0.8242 | 0.8243 | | 0.0064 | 280.0 | 4200 | 1.1424 | 0.8409 | 0.8410 | | 0.0067 | 293.33 | 4400 | 1.0934 | 0.8368 | 0.8368 | | 0.0065 | 306.67 | 4600 | 1.0865 | 0.8326 | 0.8326 | | 0.006 | 320.0 | 4800 | 1.3061 | 0.8282 | 0.8285 | | 0.0071 | 333.33 | 5000 | 1.0516 | 0.8617 | 0.8619 | | 0.0052 | 346.67 | 5200 | 1.0900 | 0.8410 | 0.8410 | | 0.0049 | 360.0 | 5400 | 1.0540 | 0.8410 | 0.8410 | | 0.0039 | 373.33 | 5600 | 1.0045 | 0.8367 | 0.8368 | | 0.0039 | 386.67 | 5800 | 1.1885 | 0.8368 | 0.8368 | | 0.0031 | 400.0 | 6000 | 1.3306 | 0.8282 | 0.8285 | | 0.0044 | 413.33 | 6200 | 1.2414 | 0.8284 | 0.8285 | | 0.0035 | 426.67 | 6400 | 1.1990 | 0.8325 | 0.8326 | | 0.003 | 440.0 | 6600 | 1.2469 | 0.8449 | 0.8452 | | 0.0034 | 453.33 | 6800 | 1.3184 | 0.8324 | 0.8326 | | 0.0038 | 466.67 | 7000 | 1.3149 | 0.8368 | 0.8368 | | 0.0023 | 480.0 | 7200 | 1.3285 | 0.8410 | 0.8410 | | 0.0026 | 493.33 | 7400 | 1.3405 | 0.8368 | 0.8368 | | 0.0023 | 506.67 | 7600 | 1.4812 | 0.8236 | 0.8243 | | 0.0035 | 520.0 | 7800 | 1.2972 | 0.8326 | 0.8326 | | 0.0024 | 533.33 | 8000 | 1.1751 | 0.8368 | 0.8368 | | 0.0016 | 546.67 | 8200 | 1.2535 | 0.8368 | 0.8368 | | 0.0018 | 560.0 | 8400 | 1.2629 | 0.8410 | 0.8410 | | 0.0013 | 573.33 | 8600 | 1.2869 | 0.8451 | 0.8452 | | 0.0012 | 586.67 | 8800 | 1.3690 | 
0.8326 | 0.8326 | | 0.0013 | 600.0 | 9000 | 1.4424 | 0.8410 | 0.8410 | | 0.0012 | 613.33 | 9200 | 1.3960 | 0.8452 | 0.8452 | | 0.0023 | 626.67 | 9400 | 1.3912 | 0.8325 | 0.8326 | | 0.0011 | 640.0 | 9600 | 1.4004 | 0.8326 | 0.8326 | | 0.0013 | 653.33 | 9800 | 1.3783 | 0.8409 | 0.8410 | | 0.0011 | 666.67 | 10000 | 1.3855 | 0.8409 | 0.8410 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_3-seqsight_32768_512_43M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_3-seqsight_32768_512_43M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:32:26+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_2-seqsight_32768_512_43M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.9569 - F1 Score: 0.8841 - Accuracy: 0.8841 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.3637 | 9.52 | 200 | 0.3209 | 0.8597 | 0.8598 | | 0.2171 | 19.05 | 400 | 0.3311 | 0.8657 | 0.8659 | | 0.1481 | 28.57 | 600 | 0.4566 | 0.8528 | 0.8537 | | 0.0949 | 38.1 | 800 | 0.5511 | 0.8686 | 0.8689 | | 0.0633 | 47.62 | 1000 | 0.5569 | 0.8718 | 0.8720 | | 0.0448 | 57.14 | 1200 | 0.6590 | 0.8561 | 0.8567 | | 0.0366 | 66.67 | 1400 | 0.7107 | 0.8626 | 0.8628 | | 0.0287 | 76.19 | 1600 | 0.7808 | 0.8709 | 0.8720 | | 0.0218 | 85.71 | 1800 | 0.6591 | 0.8745 | 0.875 | | 0.02 | 95.24 | 2000 | 0.6549 | 0.8626 | 0.8628 | | 0.014 | 104.76 | 2200 | 0.6894 | 0.8564 | 0.8567 | | 0.0139 | 114.29 | 2400 | 0.6611 | 0.8718 | 0.8720 | | 0.01 | 123.81 | 2600 | 0.8001 | 0.8687 | 0.8689 | | 0.0122 | 133.33 | 2800 | 0.6653 | 0.8688 | 0.8689 | | 0.0086 | 142.86 | 3000 | 0.7451 | 0.8777 | 0.8780 | | 0.0089 | 152.38 | 3200 | 0.7197 | 0.8716 | 0.8720 | | 0.0084 | 161.9 | 3400 | 0.7224 | 0.8655 | 0.8659 | | 0.0054 | 171.43 | 3600 | 0.7890 | 0.8716 | 0.8720 | | 0.0081 | 180.95 | 3800 | 0.6031 | 0.8932 | 0.8933 | | 0.0082 | 190.48 | 4000 | 0.7296 | 0.8747 | 0.875 | | 0.0045 | 200.0 | 4200 | 0.8088 | 0.8713 | 0.8720 | | 0.0062 | 209.52 | 4400 | 0.7307 | 0.8654 | 0.8659 | | 0.0045 | 219.05 | 4600 | 0.7611 | 0.8682 | 0.8689 | | 0.0059 | 228.57 | 4800 | 0.6720 | 0.9055 | 0.9055 | | 0.005 | 238.1 | 5000 | 0.7237 | 0.8809 | 0.8811 | | 0.0053 | 247.62 | 5200 | 0.7023 | 0.8779 | 0.8780 | | 0.0057 | 257.14 | 5400 | 0.6694 | 0.8838 | 0.8841 | | 0.0022 | 266.67 | 5600 | 0.7388 | 0.8749 | 0.875 | | 0.0028 | 276.19 | 5800 | 0.7900 | 0.8685 | 0.8689 | | 0.0053 | 285.71 | 6000 | 0.8860 | 0.8618 | 0.8628 | | 0.0036 | 295.24 | 6200 | 0.8072 | 0.8684 | 0.8689 | | 0.0027 | 304.76 | 6400 | 0.9184 | 0.8713 | 0.8720 | | 0.0023 | 314.29 | 6600 | 0.7922 | 0.8810 | 0.8811 | | 0.003 | 323.81 | 6800 | 0.7958 | 0.8809 | 0.8811 | | 0.0022 | 333.33 | 7000 | 0.8633 | 0.8778 | 0.8780 | | 0.0024 | 342.86 | 7200 | 0.8159 | 0.8901 | 0.8902 | | 0.0021 | 352.38 | 7400 | 0.8138 | 0.8777 | 0.8780 | | 0.0018 | 361.9 | 7600 | 0.7816 | 0.8902 | 0.8902 | | 0.0013 | 371.43 | 7800 | 0.8696 | 0.8685 | 0.8689 | | 0.0014 | 380.95 | 8000 | 1.0004 | 0.8684 | 0.8689 | | 0.0017 | 390.48 | 8200 | 0.9455 | 0.8778 | 0.8780 | | 0.0022 | 400.0 | 8400 | 0.9596 | 0.8683 | 0.8689 | | 0.0017 | 409.52 | 8600 | 0.8629 | 0.8809 | 0.8811 | | 0.0018 | 419.05 | 8800 | 0.8128 | 
0.8810 | 0.8811 | | 0.0011 | 428.57 | 9000 | 0.9574 | 0.8775 | 0.8780 | | 0.0013 | 438.1 | 9200 | 0.8122 | 0.8871 | 0.8872 | | 0.0012 | 447.62 | 9400 | 0.8705 | 0.8779 | 0.8780 | | 0.0009 | 457.14 | 9600 | 0.8878 | 0.8778 | 0.8780 | | 0.0011 | 466.67 | 9800 | 0.8747 | 0.8840 | 0.8841 | | 0.0008 | 476.19 | 10000 | 0.8885 | 0.8778 | 0.8780 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_2-seqsight_32768_512_43M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_2-seqsight_32768_512_43M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:33:01+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_splice_reconstructed-seqsight_32768_512_43M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset. It achieves the following results on the evaluation set: - Loss: 0.3190 - F1 Score: 0.8803 - Accuracy: 0.8799 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.955 | 0.7 | 200 | 0.8736 | 0.5246 | 0.5747 | | 0.7473 | 1.4 | 400 | 0.5026 | 0.7909 | 0.7900 | | 0.4937 | 2.1 | 600 | 0.4401 | 0.8176 | 0.8167 | | 0.4529 | 2.8 | 800 | 0.4430 | 0.8182 | 0.8174 | | 0.4353 | 3.5 | 1000 | 0.3943 | 0.8450 | 0.8446 | | 0.4202 | 4.2 | 1200 | 0.4050 | 0.8374 | 0.8363 | | 0.4033 | 4.9 | 1400 | 0.4056 | 0.8434 | 0.8428 | | 0.3966 | 5.59 | 1600 | 0.3896 | 0.8496 | 0.8492 | | 0.3876 | 6.29 | 1800 | 0.4151 | 0.8315 | 0.8303 | | 0.3714 | 6.99 | 2000 | 0.3766 | 0.8560 | 0.8551 | | 0.3707 | 7.69 | 2200 | 0.3852 | 0.8527 | 0.8518 | | 0.3641 | 8.39 | 2400 | 0.3694 | 0.8636 | 0.8630 | | 0.3618 | 9.09 | 2600 | 0.3902 | 0.8499 | 0.8488 | | 0.3545 | 9.79 | 2800 | 0.3714 | 0.8630 | 0.8623 | | 0.3512 | 10.49 | 3000 | 0.3696 | 0.8614 | 0.8608 | | 0.3379 | 11.19 | 3200 | 0.3931 | 0.8535 | 0.8523 | | 0.3396 | 11.89 | 3400 | 0.3698 | 0.8625 | 0.8617 | | 0.3282 | 12.59 | 3600 | 0.3690 | 0.8606 | 0.8599 | | 0.3322 | 13.29 | 3800 | 0.3559 | 0.8674 | 0.8665 | | 0.3275 | 13.99 | 4000 | 0.3748 | 0.8580 | 0.8569 | | 0.3202 | 14.69 | 4200 | 0.3845 | 0.8518 | 0.8509 | | 0.3197 | 15.38 | 4400 | 0.3598 | 0.8666 | 0.8658 | | 0.3156 | 16.08 | 4600 | 0.3774 | 0.8582 | 0.8573 | | 0.314 | 16.78 | 4800 | 0.3483 | 0.8700 | 0.8694 | | 0.303 | 17.48 | 5000 | 0.3798 | 0.8573 | 0.8564 | | 0.3133 | 18.18 | 5200 | 0.3417 | 0.8740 | 0.8735 | | 0.3057 | 18.88 | 5400 | 0.3506 | 0.8693 | 0.8687 | | 0.2988 | 19.58 | 5600 | 0.3490 | 0.8725 | 0.8720 | | 0.3043 | 20.28 | 5800 | 0.3407 | 0.8749 | 0.8744 | | 0.2966 | 20.98 | 6000 | 0.3386 | 0.8758 | 0.8753 | | 0.2959 | 21.68 | 6200 | 0.3516 | 0.8700 | 0.8694 | | 0.289 | 22.38 | 6400 | 0.3414 | 0.8741 | 0.8735 | | 0.2896 | 23.08 | 6600 | 0.3473 | 0.8738 | 0.8731 | | 0.2931 | 23.78 | 6800 | 0.3315 | 0.8818 | 0.8814 | | 0.287 | 24.48 | 7000 | 0.3565 | 0.8689 | 0.8683 | | 0.287 | 25.17 | 7200 | 0.3469 | 0.8691 | 0.8683 | | 0.2843 | 25.87 | 7400 | 0.3434 | 0.8730 | 0.8724 | | 0.2799 | 26.57 | 7600 | 0.3439 | 0.8730 | 0.8724 | | 0.2782 | 27.27 | 7800 | 0.3462 | 0.8758 | 0.8753 | | 0.2818 | 27.97 | 8000 | 0.3415 | 0.8731 | 0.8724 | | 0.2815 | 28.67 | 8200 | 0.3467 | 0.8711 | 0.8705 | | 0.2753 | 29.37 | 8400 | 0.3368 | 0.8763 | 0.8757 | | 0.2735 | 30.07 | 8600 | 0.3414 | 0.8754 | 0.8748 | | 0.274 | 30.77 | 8800 | 0.3470 | 0.8723 | 
0.8715 | | 0.2724 | 31.47 | 9000 | 0.3402 | 0.8750 | 0.8744 | | 0.2781 | 32.17 | 9200 | 0.3352 | 0.8772 | 0.8766 | | 0.2713 | 32.87 | 9400 | 0.3393 | 0.8750 | 0.8744 | | 0.2704 | 33.57 | 9600 | 0.3419 | 0.8742 | 0.8735 | | 0.2708 | 34.27 | 9800 | 0.3381 | 0.8759 | 0.8753 | | 0.2685 | 34.97 | 10000 | 0.3393 | 0.8748 | 0.8742 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_32768_512_43M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_32768_512_43M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:33:04+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_splice_reconstructed-seqsight_32768_512_43M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset. It achieves the following results on the evaluation set: - Loss: 0.3814 - F1 Score: 0.8453 - Accuracy: 0.8446 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.9745 | 0.7 | 200 | 0.9285 | 0.4080 | 0.5638 | | 0.9123 | 1.4 | 400 | 0.8733 | 0.5272 | 0.5868 | | 0.848 | 2.1 | 600 | 0.7708 | 0.6538 | 0.6578 | | 0.5987 | 2.8 | 800 | 0.5152 | 0.7857 | 0.7847 | | 0.5122 | 3.5 | 1000 | 0.4898 | 0.7984 | 0.7977 | | 0.4944 | 4.2 | 1200 | 0.4562 | 0.8124 | 0.8115 | | 0.4756 | 4.9 | 1400 | 0.4551 | 0.8166 | 0.8161 | | 0.467 | 5.59 | 1600 | 0.4449 | 0.8165 | 0.8159 | | 0.4573 | 6.29 | 1800 | 0.4526 | 0.8168 | 0.8159 | | 0.4418 | 6.99 | 2000 | 0.4425 | 0.8206 | 0.8196 | | 0.4422 | 7.69 | 2200 | 0.4192 | 0.8268 | 0.8262 | | 0.4368 | 8.39 | 2400 | 0.4300 | 0.8210 | 0.8203 | | 0.4357 | 9.09 | 2600 | 0.4314 | 0.8268 | 0.8257 | | 0.4291 | 9.79 | 2800 | 0.4175 | 0.8278 | 0.8270 | | 0.4281 | 10.49 | 3000 | 0.4251 | 0.8273 | 0.8264 | | 0.4181 | 11.19 | 3200 | 0.4334 | 0.8246 | 0.8235 | | 0.4198 | 11.89 | 3400 | 0.4158 | 0.8314 | 0.8306 | | 0.4128 | 12.59 | 3600 | 0.4143 | 0.8311 | 0.8303 | | 0.4151 | 13.29 | 3800 | 0.4052 | 0.8317 | 0.8308 | | 0.4138 | 13.99 | 4000 | 0.4280 | 0.8261 | 0.8251 | | 0.4053 | 14.69 | 4200 | 0.4194 | 0.8305 | 0.8297 | | 0.4049 | 15.38 | 4400 | 0.4147 | 0.8313 | 0.8303 | | 0.4087 | 16.08 | 4600 | 0.4333 | 0.8257 | 0.8246 | | 0.404 | 16.78 | 4800 | 0.4041 | 0.8351 | 0.8341 | | 0.3943 | 17.48 | 5000 | 0.4226 | 0.8296 | 0.8286 | | 0.4033 | 18.18 | 5200 | 0.3893 | 0.8444 | 0.8437 | | 0.3994 | 18.88 | 5400 | 0.4044 | 0.8357 | 0.8347 | | 0.3947 | 19.58 | 5600 | 0.3910 | 0.8417 | 0.8411 | | 0.3993 | 20.28 | 5800 | 0.3890 | 0.8437 | 0.8431 | | 0.3917 | 20.98 | 6000 | 0.3907 | 0.8411 | 0.8404 | | 0.3877 | 21.68 | 6200 | 0.4085 | 0.8353 | 0.8345 | | 0.3868 | 22.38 | 6400 | 0.3970 | 0.8391 | 0.8382 | | 0.3869 | 23.08 | 6600 | 0.3944 | 0.8417 | 0.8409 | | 0.3902 | 23.78 | 6800 | 0.3884 | 0.8432 | 0.8424 | | 0.3869 | 24.48 | 7000 | 0.4038 | 0.8356 | 0.8347 | | 0.39 | 25.17 | 7200 | 0.3901 | 0.8424 | 0.8415 | | 0.3843 | 25.87 | 7400 | 0.3998 | 0.8350 | 0.8341 | | 0.3805 | 26.57 | 7600 | 0.4001 | 0.8370 | 0.8360 | | 0.384 | 27.27 | 7800 | 0.3986 | 0.8378 | 0.8369 | | 0.3811 | 27.97 | 8000 | 0.3961 | 0.8400 | 0.8391 | | 0.386 | 28.67 | 8200 | 0.4003 | 0.8390 | 0.8382 | | 0.3772 | 29.37 | 8400 | 0.3913 | 0.8433 | 0.8424 | | 0.3804 | 30.07 | 8600 | 0.3866 | 0.8464 | 0.8457 | | 0.3798 | 30.77 | 8800 | 0.3976 | 0.8385 | 
0.8376 | | 0.3768 | 31.47 | 9000 | 0.3913 | 0.8413 | 0.8404 | | 0.3848 | 32.17 | 9200 | 0.3911 | 0.8419 | 0.8411 | | 0.3749 | 32.87 | 9400 | 0.3915 | 0.8427 | 0.8420 | | 0.3758 | 33.57 | 9600 | 0.3960 | 0.8391 | 0.8382 | | 0.3729 | 34.27 | 9800 | 0.3913 | 0.8430 | 0.8422 | | 0.3765 | 34.97 | 10000 | 0.3934 | 0.8408 | 0.8400 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
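As a side note on the reported metrics, a sketch of how the F1 score and accuracy columns are typically computed is shown below; the labels are placeholders, and the choice of weighted averaging is an assumption the card does not state.

```python
# Illustrative metric computation matching the card's F1 Score / Accuracy columns.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 1, 0, 1]  # placeholder gold labels
y_pred = [0, 1, 0, 0, 1]  # placeholder model predictions

print("F1 Score:", f1_score(y_true, y_pred, average="weighted"))
print("Accuracy:", accuracy_score(y_true, y_pred))
```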
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_32768_512_43M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_32768_512_43M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:33:04+00:00
null
null
{"license": "apache-2.0"}
lusiayytt8/T-ponynai3
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-03T14:33:20+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/yhcah4v
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T14:33:24+00:00
text-generation
transformers
# flammenai/flammen16-mistral-7B AWQ - Model creator: [flammenai](https://huggingface.co/flammenai) - Original model: [flammen16-mistral-7B](https://huggingface.co/flammenai/flammen16-mistral-7B) ## How to use ### Install the necessary packages ```bash pip install --upgrade autoawq autoawq-kernels ``` ### Example Python code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer, TextStreamer model_path = "solidrust/flammen16-mistral-7B-AWQ" system_message = "You are flammen16-mistral-7B, incarnated as a powerful AI. You were created by flammenai." # Load model model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) # Convert prompt to tokens prompt_template = """\ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant""" prompt = "You're standing on the surface of the Earth. "\ "You walk one mile south, one mile west and one mile north. "\ "You end up exactly where you started. Where are you?" tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt), return_tensors='pt').input_ids.cuda() # Generate output generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512) ``` ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference at equivalent or better quality than the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
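If the tokenizer in this repository ships a chat template, the hand-written ChatML string above can be replaced by `apply_chat_template`; this is an optional variant under that assumption, so fall back to the manual template if the attribute is missing.

```python
# Hedged variant: building the ChatML prompt via the tokenizer's chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("solidrust/flammen16-mistral-7B-AWQ")

messages = [
    {"role": "system", "content": "You are flammen16-mistral-7B."},
    {"role": "user", "content": "Where are you?"},
]
tokens = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
```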
{"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"}
solidrust/flammen16-mistral-7B-AWQ
null
[ "transformers", "safetensors", "mistral", "text-generation", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T14:33:38+00:00
feature-extraction
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
-->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
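The card itself is still a placeholder, but the repository tags below mark this as an XLM-RoBERTa checkpoint served for feature extraction. A usage sketch under that assumption (mean pooling is an assumed choice, not documented by the author):

```python
import torch
from transformers import AutoModel, AutoTokenizer

repo_id = "lyghter/2ch-wt-24-01-01-27480-3849-mel-512-pool-256e7-1x008-1-1"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)

# Encode a text and mean-pool the final hidden states into one vector.
inputs = tokenizer("пример текста", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
embedding = hidden.mean(dim=1)
print(embedding.shape)  # (1, hidden_size)
```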
{"library_name": "transformers", "tags": []}
lyghter/2ch-wt-24-01-01-27480-3849-mel-512-pool-256e7-1x008-1-1
null
[ "transformers", "safetensors", "xlm-roberta", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:34:41+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # loha_fine_tuned_boolq This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5655 - Accuracy: 0.7778 - F1: 0.6806 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:| | 0.6686 | 4.1667 | 50 | 0.6058 | 0.7778 | 0.6806 | | 0.661 | 8.3333 | 100 | 0.5835 | 0.7778 | 0.6806 | | 0.66 | 12.5 | 150 | 0.5765 | 0.7778 | 0.6806 | | 0.6685 | 16.6667 | 200 | 0.5708 | 0.7778 | 0.6806 | | 0.6634 | 20.8333 | 250 | 0.5677 | 0.7778 | 0.6806 | | 0.6573 | 25.0 | 300 | 0.5668 | 0.7778 | 0.6806 | | 0.6623 | 29.1667 | 350 | 0.5661 | 0.7778 | 0.6806 | | 0.6583 | 33.3333 | 400 | 0.5655 | 0.7778 | 0.6806 | ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.1 - Pytorch 2.1.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
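Since this checkpoint is a PEFT (LoHa) adapter rather than a full model, inference requires attaching it to the base model named above. A minimal sketch, assuming a two-label BoolQ-style setup with question/passage pairs (the label order is an assumption):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "google-bert/bert-base-uncased"
adapter_id = "anzeo/loha_fine_tuned_boolq"

# Load the frozen base classifier, then attach the LoHa adapter weights.
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

question = "is the sky blue during the day"
passage = "The sky appears blue during the day because of Rayleigh scattering."
inputs = tokenizer(question, passage, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(-1))  # assumed order: [no, yes]
```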
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "loha_fine_tuned_boolq", "results": []}]}
anzeo/loha_fine_tuned_boolq
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "license:apache-2.0", "region:us" ]
null
2024-05-03T14:36:13+00:00
text-classification
transformers
{}
onelock/distilbert-base-uncased-finetuned-emotion
null
[ "transformers", "safetensors", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:36:29+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
-->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_32_0.05_2_5e-05
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:37:15+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
-->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
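The card is empty, but the tags below identify a Llama-architecture, conversational text-generation checkpoint; a loading sketch under that assumption:

```python
from transformers import pipeline

# Loading sketch only: the model's behaviour, chat template, and license
# are all undocumented in this card.
chat = pipeline("text-generation", model="cilantro9246/hhdxmho")
print(chat("Hello! Briefly introduce yourself.", max_new_tokens=32)[0]["generated_text"])
```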
{"library_name": "transformers", "tags": []}
cilantro9246/hhdxmho
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T14:37:19+00:00
text-generation
transformers
{}
mervezorlu/image-GIT-colors-model-v2
null
[ "transformers", "pytorch", "git", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:38:11+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
-->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
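The card is a placeholder; the tags below identify a Mask2Former checkpoint, and the repository name suggests instance segmentation, so the following sketch treats it that way (both the task and the presence of a preprocessor config in the repo are assumptions):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

repo_id = "grrvk/palm-instance"

processor = AutoImageProcessor.from_pretrained(repo_id)
model = Mask2FormerForUniversalSegmentation.from_pretrained(repo_id)

image = Image.open("example.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Post-process into per-instance masks at the original resolution.
result = processor.post_process_instance_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(result["segmentation"].shape)
```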
{"library_name": "transformers", "tags": []}
grrvk/palm-instance
null
[ "transformers", "safetensors", "mask2former", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:39:22+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-70m_niki-041a_imdb_random-token-1280_10-rounds_seed-4 This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
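The card gives no usage snippet; since the repo is tagged for text classification, a pipeline call is the simplest way to query it. A sketch (the returned label names are whatever the authors configured, which is not documented here):

```python
from transformers import pipeline

# A GPTNeoX sequence-classification head fine-tuned on an IMDB-derived
# adversarial-robustness task, per the repo name and tags.
clf = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-70m_niki-041a_imdb_random-token-1280_10-rounds_seed-4",
)
print(clf("An unexpectedly moving film with a terrific lead performance."))
```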
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-70m", "model-index": [{"name": "robust_llm_pythia-70m_niki-041a_imdb_random-token-1280_10-rounds_seed-4", "results": []}]}
AlignmentResearch/robust_llm_pythia-70m_niki-041a_imdb_random-token-1280_10-rounds_seed-4
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-70m", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T14:39:23+00:00
feature-extraction
transformers
{}
lyghter/2ch-wt-24-01-01-27480-3849-mel-512-pool-002e7-1x004-1-1
null
[ "transformers", "safetensors", "xlm-roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2024-05-03T14:40:03+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_splice_reconstructed-seqsight_32768_512_43M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset. It achieves the following results on the evaluation set: - Loss: 0.2944 - F1 Score: 0.8969 - Accuracy: 0.8965 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.9188 | 0.7 | 200 | 0.7935 | 0.5622 | 0.6243 | | 0.6099 | 1.4 | 400 | 0.4862 | 0.7908 | 0.7898 | | 0.4544 | 2.1 | 600 | 0.4126 | 0.8366 | 0.8356 | | 0.4102 | 2.8 | 800 | 0.4036 | 0.8410 | 0.8402 | | 0.3846 | 3.5 | 1000 | 0.3809 | 0.8568 | 0.8558 | | 0.3753 | 4.2 | 1200 | 0.3684 | 0.8555 | 0.8547 | | 0.3527 | 4.9 | 1400 | 0.3712 | 0.8595 | 0.8588 | | 0.3435 | 5.59 | 1600 | 0.3718 | 0.8604 | 0.8606 | | 0.3326 | 6.29 | 1800 | 0.3858 | 0.8508 | 0.8498 | | 0.315 | 6.99 | 2000 | 0.3399 | 0.8746 | 0.8742 | | 0.3118 | 7.69 | 2200 | 0.3469 | 0.8706 | 0.8700 | | 0.3028 | 8.39 | 2400 | 0.3435 | 0.8760 | 0.8757 | | 0.2986 | 9.09 | 2600 | 0.3606 | 0.8704 | 0.8696 | | 0.2917 | 9.79 | 2800 | 0.3505 | 0.8754 | 0.8748 | | 0.2865 | 10.49 | 3000 | 0.3394 | 0.8789 | 0.8783 | | 0.276 | 11.19 | 3200 | 0.3682 | 0.8661 | 0.8652 | | 0.2786 | 11.89 | 3400 | 0.3400 | 0.8816 | 0.8810 | | 0.2643 | 12.59 | 3600 | 0.3552 | 0.8745 | 0.8740 | | 0.2619 | 13.29 | 3800 | 0.3251 | 0.8830 | 0.8825 | | 0.2603 | 13.99 | 4000 | 0.3561 | 0.8706 | 0.8696 | | 0.2511 | 14.69 | 4200 | 0.3451 | 0.8765 | 0.8757 | | 0.2533 | 15.38 | 4400 | 0.3412 | 0.8823 | 0.8819 | | 0.2457 | 16.08 | 4600 | 0.3587 | 0.8733 | 0.8726 | | 0.2401 | 16.78 | 4800 | 0.3349 | 0.8826 | 0.8821 | | 0.2311 | 17.48 | 5000 | 0.3627 | 0.8715 | 0.8707 | | 0.2391 | 18.18 | 5200 | 0.3327 | 0.8834 | 0.8829 | | 0.2315 | 18.88 | 5400 | 0.3346 | 0.8841 | 0.8836 | | 0.2267 | 19.58 | 5600 | 0.3367 | 0.8862 | 0.8858 | | 0.2274 | 20.28 | 5800 | 0.3212 | 0.8918 | 0.8915 | | 0.2223 | 20.98 | 6000 | 0.3144 | 0.8921 | 0.8917 | | 0.2185 | 21.68 | 6200 | 0.3224 | 0.8919 | 0.8915 | | 0.2143 | 22.38 | 6400 | 0.3305 | 0.8870 | 0.8865 | | 0.2132 | 23.08 | 6600 | 0.3316 | 0.8871 | 0.8867 | | 0.2133 | 23.78 | 6800 | 0.3171 | 0.8943 | 0.8939 | | 0.206 | 24.48 | 7000 | 0.3350 | 0.8855 | 0.8851 | | 0.2058 | 25.17 | 7200 | 0.3413 | 0.8843 | 0.8838 | | 0.2045 | 25.87 | 7400 | 0.3234 | 0.8893 | 0.8889 | | 0.2025 | 26.57 | 7600 | 0.3352 | 0.8870 | 0.8865 | | 0.1978 | 27.27 | 7800 | 0.3420 | 0.8867 | 0.8862 | | 0.1981 | 27.97 | 8000 | 0.3323 | 0.8902 | 0.8897 | | 0.1971 | 28.67 | 8200 | 0.3340 | 0.8891 | 0.8886 | | 0.1926 | 29.37 | 8400 | 0.3222 | 0.8899 | 0.8895 | | 0.1915 | 30.07 | 8600 | 0.3282 | 0.8925 | 0.8922 | | 0.1907 | 30.77 | 8800 | 0.3352 | 0.8909 
| 0.8904 |
| 0.1877 | 31.47 | 9000 | 0.3349 | 0.8905 | 0.8900 |
| 0.1916 | 32.17 | 9200 | 0.3329 | 0.8907 | 0.8902 |
| 0.1826 | 32.87 | 9400 | 0.3334 | 0.8917 | 0.8913 |
| 0.1879 | 33.57 | 9600 | 0.3380 | 0.8894 | 0.8889 |
| 0.1849 | 34.27 | 9800 | 0.3340 | 0.8898 | 0.8893 |
| 0.184 | 34.97 | 10000 | 0.3356 | 0.8898 | 0.8893 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
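As with the other seqsight checkpoints in this series, this repo ships a PEFT adapter on top of the base model linked above. A loading sketch; `num_labels=3` and `trust_remote_code=True` are assumptions about the splice-site task and the base repo:

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_32768_512_43M"
adapter_id = "mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_32768_512_43M-L32_f"

# Attach the fine-tuned adapter to the frozen genomic base model.
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=3, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)

inputs = tokenizer("ACGTAGCTAGCTTGCAGTACGGATC", return_tensors="pt")
print(model(**inputs).logits)
```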
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_32768_512_43M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_32768_512_43M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:40:59+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_0-seqsight_32768_512_43M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.3686 - F1 Score: 0.8378 - Accuracy: 0.838 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5602 | 0.79 | 200 | 0.4708 | 0.7707 | 0.771 | | 0.4815 | 1.58 | 400 | 0.4658 | 0.7708 | 0.771 | | 0.4804 | 2.37 | 600 | 0.4645 | 0.7702 | 0.771 | | 0.4702 | 3.16 | 800 | 0.4669 | 0.7771 | 0.777 | | 0.4657 | 3.95 | 1000 | 0.4622 | 0.7780 | 0.778 | | 0.4641 | 4.74 | 1200 | 0.4698 | 0.7720 | 0.772 | | 0.4623 | 5.53 | 1400 | 0.4627 | 0.7731 | 0.773 | | 0.4552 | 6.32 | 1600 | 0.4624 | 0.7750 | 0.775 | | 0.4605 | 7.11 | 1800 | 0.4681 | 0.7789 | 0.779 | | 0.4595 | 7.91 | 2000 | 0.4625 | 0.7730 | 0.773 | | 0.4542 | 8.7 | 2200 | 0.4602 | 0.7780 | 0.778 | | 0.4525 | 9.49 | 2400 | 0.4544 | 0.7756 | 0.776 | | 0.4524 | 10.28 | 2600 | 0.4749 | 0.7754 | 0.776 | | 0.4505 | 11.07 | 2800 | 0.4669 | 0.7748 | 0.775 | | 0.4501 | 11.86 | 3000 | 0.4698 | 0.7709 | 0.771 | | 0.4502 | 12.65 | 3200 | 0.4697 | 0.7795 | 0.78 | | 0.4485 | 13.44 | 3400 | 0.4513 | 0.7860 | 0.786 | | 0.4476 | 14.23 | 3600 | 0.4519 | 0.7831 | 0.783 | | 0.4461 | 15.02 | 3800 | 0.4619 | 0.7799 | 0.78 | | 0.4474 | 15.81 | 4000 | 0.4591 | 0.7799 | 0.78 | | 0.4425 | 16.6 | 4200 | 0.4516 | 0.7811 | 0.781 | | 0.4458 | 17.39 | 4400 | 0.4671 | 0.7817 | 0.782 | | 0.4429 | 18.18 | 4600 | 0.4565 | 0.7809 | 0.781 | | 0.4412 | 18.97 | 4800 | 0.4664 | 0.7764 | 0.777 | | 0.4449 | 19.76 | 5000 | 0.4518 | 0.7810 | 0.781 | | 0.4425 | 20.55 | 5200 | 0.4483 | 0.7821 | 0.782 | | 0.4386 | 21.34 | 5400 | 0.4535 | 0.7830 | 0.783 | | 0.4392 | 22.13 | 5600 | 0.4586 | 0.7799 | 0.78 | | 0.4422 | 22.92 | 5800 | 0.4534 | 0.7820 | 0.782 | | 0.4357 | 23.72 | 6000 | 0.4632 | 0.7798 | 0.78 | | 0.4382 | 24.51 | 6200 | 0.4507 | 0.7840 | 0.784 | | 0.4415 | 25.3 | 6400 | 0.4554 | 0.7840 | 0.784 | | 0.4336 | 26.09 | 6600 | 0.4530 | 0.7850 | 0.785 | | 0.4353 | 26.88 | 6800 | 0.4604 | 0.7839 | 0.784 | | 0.4368 | 27.67 | 7000 | 0.4507 | 0.7801 | 0.78 | | 0.4372 | 28.46 | 7200 | 0.4485 | 0.7801 | 0.78 | | 0.4377 | 29.25 | 7400 | 0.4544 | 0.7850 | 0.785 | | 0.4395 | 30.04 | 7600 | 0.4516 | 0.7801 | 0.78 | | 0.4366 | 30.83 | 7800 | 0.4540 | 0.7880 | 0.788 | | 0.4378 | 31.62 | 8000 | 0.4494 | 0.7790 | 0.779 | | 0.4342 | 32.41 | 8200 | 0.4553 | 0.7860 | 0.786 | | 0.4315 | 33.2 | 8400 | 0.4560 | 0.7850 | 0.785 | | 0.437 | 33.99 | 8600 | 0.4525 | 0.7811 | 0.781 | | 0.4358 | 34.78 | 8800 | 0.4510 | 0.7811 | 0.781 | | 0.4303 | 35.57 | 9000 | 0.4537 | 0.7810 | 0.781 | | 0.4335 | 36.36 | 9200 | 0.4584 
| 0.7850 | 0.785 |
| 0.4381 | 37.15 | 9400 | 0.4580 | 0.7850 | 0.785 |
| 0.4336 | 37.94 | 9600 | 0.4540 | 0.7840 | 0.784 |
| 0.4343 | 38.74 | 9800 | 0.4554 | 0.788 | 0.788 |
| 0.4349 | 39.53 | 10000 | 0.4550 | 0.7880 | 0.788 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
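The F1 Score and Accuracy columns above are standard classification metrics; a `compute_metrics` callback in the style the HF Trainer expects would reproduce them roughly as follows (the F1 averaging mode is an assumption):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),  # averaging mode assumed
        "accuracy": accuracy_score(labels, preds),
    }
```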
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_tf_0-seqsight_32768_512_43M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_0-seqsight_32768_512_43M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:41:03+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_0-seqsight_32768_512_43M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.3670 - F1 Score: 0.8379 - Accuracy: 0.838 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5365 | 0.79 | 200 | 0.4690 | 0.7631 | 0.764 | | 0.4729 | 1.58 | 400 | 0.4592 | 0.7785 | 0.779 | | 0.4706 | 2.37 | 600 | 0.4590 | 0.7719 | 0.773 | | 0.4597 | 3.16 | 800 | 0.4564 | 0.7850 | 0.785 | | 0.456 | 3.95 | 1000 | 0.4520 | 0.7830 | 0.783 | | 0.4526 | 4.74 | 1200 | 0.4576 | 0.7830 | 0.783 | | 0.4497 | 5.53 | 1400 | 0.4657 | 0.7714 | 0.772 | | 0.4429 | 6.32 | 1600 | 0.4557 | 0.7727 | 0.773 | | 0.4464 | 7.11 | 1800 | 0.4691 | 0.7730 | 0.774 | | 0.4436 | 7.91 | 2000 | 0.4641 | 0.7715 | 0.772 | | 0.4381 | 8.7 | 2200 | 0.4490 | 0.7890 | 0.789 | | 0.4357 | 9.49 | 2400 | 0.4437 | 0.7839 | 0.784 | | 0.4356 | 10.28 | 2600 | 0.4569 | 0.7848 | 0.785 | | 0.4325 | 11.07 | 2800 | 0.4617 | 0.7724 | 0.773 | | 0.4314 | 11.86 | 3000 | 0.4716 | 0.7847 | 0.785 | | 0.4299 | 12.65 | 3200 | 0.4579 | 0.7827 | 0.783 | | 0.4259 | 13.44 | 3400 | 0.4478 | 0.7910 | 0.791 | | 0.4262 | 14.23 | 3600 | 0.4493 | 0.7979 | 0.798 | | 0.4243 | 15.02 | 3800 | 0.4594 | 0.7877 | 0.788 | | 0.4245 | 15.81 | 4000 | 0.4492 | 0.7838 | 0.784 | | 0.418 | 16.6 | 4200 | 0.4491 | 0.7831 | 0.783 | | 0.4219 | 17.39 | 4400 | 0.4660 | 0.7825 | 0.783 | | 0.4193 | 18.18 | 4600 | 0.4481 | 0.7920 | 0.792 | | 0.4168 | 18.97 | 4800 | 0.4636 | 0.7813 | 0.782 | | 0.4186 | 19.76 | 5000 | 0.4498 | 0.7909 | 0.791 | | 0.4187 | 20.55 | 5200 | 0.4455 | 0.7970 | 0.797 | | 0.4117 | 21.34 | 5400 | 0.4562 | 0.7854 | 0.786 | | 0.4139 | 22.13 | 5600 | 0.4638 | 0.7745 | 0.775 | | 0.4143 | 22.92 | 5800 | 0.4511 | 0.7859 | 0.786 | | 0.4071 | 23.72 | 6000 | 0.4612 | 0.7837 | 0.784 | | 0.4098 | 24.51 | 6200 | 0.4475 | 0.7909 | 0.791 | | 0.4147 | 25.3 | 6400 | 0.4531 | 0.7858 | 0.786 | | 0.4038 | 26.09 | 6600 | 0.4504 | 0.7961 | 0.796 | | 0.4061 | 26.88 | 6800 | 0.4573 | 0.7867 | 0.787 | | 0.4082 | 27.67 | 7000 | 0.4481 | 0.7960 | 0.796 | | 0.4072 | 28.46 | 7200 | 0.4474 | 0.7971 | 0.797 | | 0.4081 | 29.25 | 7400 | 0.4574 | 0.7898 | 0.79 | | 0.4089 | 30.04 | 7600 | 0.4573 | 0.7897 | 0.79 | | 0.4056 | 30.83 | 7800 | 0.4507 | 0.7920 | 0.792 | | 0.4047 | 31.62 | 8000 | 0.4455 | 0.7931 | 0.793 | | 0.4034 | 32.41 | 8200 | 0.4576 | 0.7938 | 0.794 | | 0.4008 | 33.2 | 8400 | 0.4555 | 0.7958 | 0.796 | | 0.4052 | 33.99 | 8600 | 0.4521 | 0.7939 | 0.794 | | 0.4031 | 34.78 | 8800 | 0.4488 | 0.7970 | 0.797 | | 0.398 | 35.57 | 9000 | 0.4528 | 0.7950 | 0.795 | | 0.4018 | 36.36 | 9200 | 
0.4607 | 0.7835 | 0.784 |
| 0.4043 | 37.15 | 9400 | 0.4594 | 0.7877 | 0.788 |
| 0.401 | 37.94 | 9600 | 0.4531 | 0.7969 | 0.797 |
| 0.403 | 38.74 | 9800 | 0.4550 | 0.7928 | 0.793 |
| 0.4019 | 39.53 | 10000 | 0.4545 | 0.7938 | 0.794 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
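The hyperparameter block above translates directly into a PyTorch optimization setup; a sketch, where `model` and `train_loader` are assumed to exist (the PEFT-wrapped classifier and a DataLoader with batch size 128):

```python
import torch
from transformers import get_linear_schedule_with_warmup

# Adam with the stated betas and epsilon, a linear LR schedule,
# and the 10,000 training steps listed above.
optimizer = torch.optim.Adam(
    model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000
)

step = 0
while step < 10_000:
    for batch in train_loader:
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
        step += 1
        if step == 10_000:
            break
```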
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_tf_0-seqsight_32768_512_43M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_0-seqsight_32768_512_43M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:41:03+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_0-seqsight_32768_512_43M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.3652 - F1 Score: 0.8384 - Accuracy: 0.839 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5168 | 0.79 | 200 | 0.4704 | 0.7720 | 0.772 | | 0.4668 | 1.58 | 400 | 0.4545 | 0.7810 | 0.781 | | 0.4609 | 2.37 | 600 | 0.4539 | 0.7782 | 0.779 | | 0.4497 | 3.16 | 800 | 0.4563 | 0.7888 | 0.789 | | 0.4438 | 3.95 | 1000 | 0.4478 | 0.7820 | 0.782 | | 0.439 | 4.74 | 1200 | 0.4575 | 0.7790 | 0.779 | | 0.4338 | 5.53 | 1400 | 0.4695 | 0.7697 | 0.771 | | 0.4262 | 6.32 | 1600 | 0.4528 | 0.7849 | 0.785 | | 0.4298 | 7.11 | 1800 | 0.4694 | 0.7839 | 0.785 | | 0.4251 | 7.91 | 2000 | 0.4612 | 0.7826 | 0.783 | | 0.4186 | 8.7 | 2200 | 0.4532 | 0.7910 | 0.791 | | 0.4135 | 9.49 | 2400 | 0.4460 | 0.8070 | 0.807 | | 0.4145 | 10.28 | 2600 | 0.4561 | 0.7835 | 0.784 | | 0.4106 | 11.07 | 2800 | 0.4560 | 0.7907 | 0.791 | | 0.4085 | 11.86 | 3000 | 0.4680 | 0.7936 | 0.794 | | 0.404 | 12.65 | 3200 | 0.4566 | 0.7887 | 0.789 | | 0.3998 | 13.44 | 3400 | 0.4553 | 0.7969 | 0.797 | | 0.3977 | 14.23 | 3600 | 0.4474 | 0.7978 | 0.798 | | 0.3957 | 15.02 | 3800 | 0.4564 | 0.7936 | 0.794 | | 0.3928 | 15.81 | 4000 | 0.4417 | 0.7969 | 0.797 | | 0.3856 | 16.6 | 4200 | 0.4596 | 0.7910 | 0.791 | | 0.3891 | 17.39 | 4400 | 0.4669 | 0.7914 | 0.792 | | 0.3839 | 18.18 | 4600 | 0.4529 | 0.7991 | 0.799 | | 0.3802 | 18.97 | 4800 | 0.4674 | 0.7897 | 0.79 | | 0.3798 | 19.76 | 5000 | 0.4528 | 0.8020 | 0.802 | | 0.3777 | 20.55 | 5200 | 0.4634 | 0.7960 | 0.796 | | 0.3715 | 21.34 | 5400 | 0.4812 | 0.7914 | 0.792 | | 0.371 | 22.13 | 5600 | 0.4803 | 0.7957 | 0.796 | | 0.3702 | 22.92 | 5800 | 0.4591 | 0.7940 | 0.794 | | 0.3635 | 23.72 | 6000 | 0.4693 | 0.7959 | 0.796 | | 0.3641 | 24.51 | 6200 | 0.4534 | 0.7888 | 0.789 | | 0.3669 | 25.3 | 6400 | 0.4633 | 0.7939 | 0.794 | | 0.3543 | 26.09 | 6600 | 0.4677 | 0.7990 | 0.799 | | 0.3575 | 26.88 | 6800 | 0.4748 | 0.7916 | 0.792 | | 0.3568 | 27.67 | 7000 | 0.4710 | 0.8010 | 0.801 | | 0.3552 | 28.46 | 7200 | 0.4785 | 0.8000 | 0.8 | | 0.3536 | 29.25 | 7400 | 0.4861 | 0.7907 | 0.791 | | 0.3554 | 30.04 | 7600 | 0.4939 | 0.7812 | 0.782 | | 0.3493 | 30.83 | 7800 | 0.4906 | 0.7945 | 0.795 | | 0.3475 | 31.62 | 8000 | 0.4792 | 0.7940 | 0.794 | | 0.3487 | 32.41 | 8200 | 0.4886 | 0.7935 | 0.794 | | 0.3432 | 33.2 | 8400 | 0.4888 | 0.7987 | 0.799 | | 0.3442 | 33.99 | 8600 | 0.4880 | 0.7968 | 0.797 | | 0.3425 | 34.78 | 8800 | 0.4801 | 0.7929 | 0.793 | | 0.339 | 35.57 | 9000 | 0.4823 | 0.7958 | 0.796 | | 0.3385 | 36.36 | 9200 | 
0.4975 | 0.7934 | 0.794 |
| 0.3431 | 37.15 | 9400 | 0.5009 | 0.7934 | 0.794 |
| 0.3398 | 37.94 | 9600 | 0.4922 | 0.7976 | 0.798 |
| 0.3427 | 38.74 | 9800 | 0.4941 | 0.7985 | 0.799 |
| 0.3397 | 39.53 | 10000 | 0.4934 | 0.7966 | 0.797 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
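To reproduce the evaluation numbers, the held-out split of the linked dataset can be scored with a Trainer. A sketch: the split and column names are assumptions about the GUE_tf_0 layout, and `model`, `tokenizer`, and `compute_metrics` are taken as already defined (see the sketches above):

```python
from datasets import load_dataset
from transformers import Trainer, TrainingArguments

ds = load_dataset("mahdibaghbanzadeh/GUE_tf_0", split="test")  # split name assumed
ds = ds.map(lambda ex: tokenizer(ex["sequence"], truncation=True), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="eval-out", per_device_eval_batch_size=128),
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
    compute_metrics=compute_metrics,
)
print(trainer.evaluate(eval_dataset=ds))
```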
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_tf_0-seqsight_32768_512_43M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_0-seqsight_32768_512_43M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:41:15+00:00
null
null
{}
ArbaazBeg/deberta-base-0
null
[ "region:us" ]
null
2024-05-03T14:41:40+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_1-seqsight_32768_512_43M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.3292 - F1 Score: 0.8619 - Accuracy: 0.862 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5753 | 0.83 | 200 | 0.5324 | 0.7460 | 0.746 | | 0.5032 | 1.67 | 400 | 0.5240 | 0.7560 | 0.756 | | 0.4958 | 2.5 | 600 | 0.5291 | 0.7452 | 0.746 | | 0.4912 | 3.33 | 800 | 0.5346 | 0.7416 | 0.744 | | 0.4895 | 4.17 | 1000 | 0.5222 | 0.7441 | 0.745 | | 0.4823 | 5.0 | 1200 | 0.5219 | 0.7545 | 0.755 | | 0.4829 | 5.83 | 1400 | 0.5129 | 0.7566 | 0.757 | | 0.4811 | 6.67 | 1600 | 0.5231 | 0.7508 | 0.752 | | 0.4796 | 7.5 | 1800 | 0.5094 | 0.7617 | 0.762 | | 0.4742 | 8.33 | 2000 | 0.5179 | 0.7529 | 0.754 | | 0.4782 | 9.17 | 2200 | 0.5183 | 0.7549 | 0.756 | | 0.4743 | 10.0 | 2400 | 0.5211 | 0.7608 | 0.762 | | 0.4736 | 10.83 | 2600 | 0.5208 | 0.7544 | 0.756 | | 0.476 | 11.67 | 2800 | 0.5182 | 0.7502 | 0.752 | | 0.4653 | 12.5 | 3000 | 0.5132 | 0.7607 | 0.762 | | 0.4765 | 13.33 | 3200 | 0.5096 | 0.7540 | 0.755 | | 0.4698 | 14.17 | 3400 | 0.5039 | 0.7586 | 0.759 | | 0.4679 | 15.0 | 3600 | 0.5267 | 0.7440 | 0.747 | | 0.4667 | 15.83 | 3800 | 0.5089 | 0.7536 | 0.755 | | 0.469 | 16.67 | 4000 | 0.5170 | 0.7490 | 0.751 | | 0.4635 | 17.5 | 4200 | 0.5128 | 0.7552 | 0.757 | | 0.4626 | 18.33 | 4400 | 0.5197 | 0.7558 | 0.757 | | 0.4654 | 19.17 | 4600 | 0.5184 | 0.7466 | 0.749 | | 0.4663 | 20.0 | 4800 | 0.5068 | 0.7577 | 0.759 | | 0.4641 | 20.83 | 5000 | 0.5086 | 0.7640 | 0.765 | | 0.4619 | 21.67 | 5200 | 0.5033 | 0.7632 | 0.764 | | 0.4611 | 22.5 | 5400 | 0.5023 | 0.7566 | 0.757 | | 0.4627 | 23.33 | 5600 | 0.5086 | 0.7616 | 0.763 | | 0.4608 | 24.17 | 5800 | 0.5129 | 0.7565 | 0.758 | | 0.4619 | 25.0 | 6000 | 0.5022 | 0.7525 | 0.753 | | 0.4577 | 25.83 | 6200 | 0.5043 | 0.7562 | 0.757 | | 0.4599 | 26.67 | 6400 | 0.4997 | 0.7568 | 0.757 | | 0.4641 | 27.5 | 6600 | 0.5066 | 0.7571 | 0.758 | | 0.4596 | 28.33 | 6800 | 0.5053 | 0.7601 | 0.761 | | 0.4601 | 29.17 | 7000 | 0.5050 | 0.7621 | 0.763 | | 0.4597 | 30.0 | 7200 | 0.5037 | 0.7590 | 0.76 | | 0.458 | 30.83 | 7400 | 0.5124 | 0.7511 | 0.753 | | 0.4566 | 31.67 | 7600 | 0.5007 | 0.7603 | 0.761 | | 0.4605 | 32.5 | 7800 | 0.5002 | 0.7592 | 0.76 | | 0.4554 | 33.33 | 8000 | 0.5048 | 0.7568 | 0.758 | | 0.4596 | 34.17 | 8200 | 0.5029 | 0.7621 | 0.763 | | 0.4559 | 35.0 | 8400 | 0.5016 | 0.7591 | 0.76 | | 0.4562 | 35.83 | 8600 | 0.5031 | 0.7590 | 0.76 | | 0.4577 | 36.67 | 8800 | 0.5046 | 0.7597 | 0.761 | | 0.4529 | 37.5 | 9000 | 0.5138 | 0.7467 | 0.749 | | 0.4558 | 38.33 | 9200 | 0.5047 | 
0.7619 | 0.763 |
| 0.4549 | 39.17 | 9400 | 0.5045 | 0.7600 | 0.761 |
| 0.4568 | 40.0 | 9600 | 0.5059 | 0.7628 | 0.764 |
| 0.4546 | 40.83 | 9800 | 0.5018 | 0.7562 | 0.757 |
| 0.4577 | 41.67 | 10000 | 0.5032 | 0.7600 | 0.761 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
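Because these repos ship only adapter weights, deployments that do not want a PEFT dependency at inference time can fold the adapter into the base model first. A sketch (`num_labels=2` assumes a binary transcription-factor-binding task):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_32768_512_43M", num_labels=2  # binary task assumed
)
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_tf_1-seqsight_32768_512_43M-L1_f"
)

# merge_and_unload folds the low-rank updates into the base weights and
# returns a plain transformers model with no PEFT dependency.
merged = model.merge_and_unload()
merged.save_pretrained("gue_tf_1_merged")
```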
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_tf_1-seqsight_32768_512_43M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_1-seqsight_32768_512_43M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:41:45+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_1-seqsight_32768_512_43M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.3159 - F1 Score: 0.8648 - Accuracy: 0.865 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5511 | 0.83 | 200 | 0.5232 | 0.7560 | 0.756 | | 0.4924 | 1.67 | 400 | 0.5161 | 0.758 | 0.758 | | 0.4859 | 2.5 | 600 | 0.5182 | 0.7533 | 0.754 | | 0.4813 | 3.33 | 800 | 0.5122 | 0.7458 | 0.747 | | 0.4788 | 4.17 | 1000 | 0.5201 | 0.7502 | 0.752 | | 0.4714 | 5.0 | 1200 | 0.5120 | 0.7552 | 0.756 | | 0.469 | 5.83 | 1400 | 0.5057 | 0.7554 | 0.756 | | 0.4678 | 6.67 | 1600 | 0.5125 | 0.7494 | 0.751 | | 0.4659 | 7.5 | 1800 | 0.4952 | 0.7660 | 0.766 | | 0.459 | 8.33 | 2000 | 0.5023 | 0.7564 | 0.757 | | 0.4632 | 9.17 | 2200 | 0.4981 | 0.7615 | 0.762 | | 0.4572 | 10.0 | 2400 | 0.5176 | 0.7485 | 0.751 | | 0.456 | 10.83 | 2600 | 0.5159 | 0.7429 | 0.746 | | 0.457 | 11.67 | 2800 | 0.5063 | 0.7481 | 0.75 | | 0.4461 | 12.5 | 3000 | 0.4999 | 0.7557 | 0.757 | | 0.456 | 13.33 | 3200 | 0.5034 | 0.7363 | 0.739 | | 0.4491 | 14.17 | 3400 | 0.4857 | 0.7588 | 0.759 | | 0.4467 | 15.0 | 3600 | 0.5032 | 0.7476 | 0.75 | | 0.4443 | 15.83 | 3800 | 0.4992 | 0.7478 | 0.75 | | 0.446 | 16.67 | 4000 | 0.5010 | 0.7455 | 0.748 | | 0.4396 | 17.5 | 4200 | 0.5046 | 0.7456 | 0.748 | | 0.4384 | 18.33 | 4400 | 0.5007 | 0.7535 | 0.755 | | 0.4406 | 19.17 | 4600 | 0.5021 | 0.7483 | 0.75 | | 0.4418 | 20.0 | 4800 | 0.4931 | 0.7513 | 0.753 | | 0.4385 | 20.83 | 5000 | 0.4958 | 0.7564 | 0.758 | | 0.436 | 21.67 | 5200 | 0.4934 | 0.7572 | 0.759 | | 0.432 | 22.5 | 5400 | 0.4955 | 0.7572 | 0.758 | | 0.4346 | 23.33 | 5600 | 0.5005 | 0.7431 | 0.746 | | 0.433 | 24.17 | 5800 | 0.5042 | 0.7455 | 0.749 | | 0.433 | 25.0 | 6000 | 0.4909 | 0.7583 | 0.759 | | 0.4292 | 25.83 | 6200 | 0.4892 | 0.7583 | 0.759 | | 0.4289 | 26.67 | 6400 | 0.4848 | 0.7769 | 0.777 | | 0.4325 | 27.5 | 6600 | 0.4913 | 0.7591 | 0.76 | | 0.4285 | 28.33 | 6800 | 0.4898 | 0.7613 | 0.762 | | 0.4287 | 29.17 | 7000 | 0.4929 | 0.7599 | 0.761 | | 0.4281 | 30.0 | 7200 | 0.4861 | 0.7592 | 0.76 | | 0.4261 | 30.83 | 7400 | 0.4980 | 0.7487 | 0.751 | | 0.424 | 31.67 | 7600 | 0.4926 | 0.7612 | 0.762 | | 0.4284 | 32.5 | 7800 | 0.4902 | 0.7550 | 0.756 | | 0.4256 | 33.33 | 8000 | 0.4937 | 0.7558 | 0.757 | | 0.4261 | 34.17 | 8200 | 0.4939 | 0.7629 | 0.764 | | 0.422 | 35.0 | 8400 | 0.4920 | 0.7559 | 0.757 | | 0.4241 | 35.83 | 8600 | 0.4917 | 0.7580 | 0.759 | | 0.4226 | 36.67 | 8800 | 0.4958 | 0.7505 | 0.752 | | 0.4193 | 37.5 | 9000 | 0.5064 | 0.7410 | 0.744 | | 0.4206 | 38.33 | 9200 | 0.4976 | 0.7538 | 0.755 | 
| 0.4194 | 39.17 | 9400 | 0.4972 | 0.7547 | 0.756 |
| 0.4216 | 40.0 | 9600 | 0.4976 | 0.7514 | 0.753 |
| 0.42 | 40.83 | 9800 | 0.4927 | 0.7592 | 0.76 |
| 0.4225 | 41.67 | 10000 | 0.4951 | 0.7559 | 0.757 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
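The same hyperparameters can also be expressed as a `TrainingArguments` block for the HF Trainer; the Trainer's default Adam betas and epsilon already match the values listed above:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameter block above: lr 0.0005, batch size 128,
# seed 42, linear LR schedule, 10,000 training steps.
args = TrainingArguments(
    output_dir="gue_tf_1_l8",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```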
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_tf_1-seqsight_32768_512_43M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_1-seqsight_32768_512_43M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_32768_512_43M", "region:us" ]
null
2024-05-03T14:42:04+00:00