Column schema (type and value range) for the records that follow:

| Column | Type | Range / values |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-05 12:28:32 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 468 distinct values |
| tags | sequence | lengths 1 to 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-05 12:27:45 |
| card | string | lengths 11 to 1.01M |
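As an orientation aid, rows with this schema can be read with the `datasets` library along the lines below; the repository id is a placeholder, not the real dataset name.

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual dataset name.
ds = load_dataset("username/models-metadata", split="train")

# Each row carries the columns described in the schema above.
row = ds[0]
print(row["modelId"], row["author"], row["downloads"], row["likes"])
print(row["library_name"], row["pipeline_tag"])
print(row["card"][:200])  # first 200 characters of the model-card text
```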
chienweichang/Breeze-7B-Instruct-64k-v0_1-TaiwanChat-lora
chienweichang
2024-02-16T05:57:47Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-16T05:57:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sanjay782/test_qg
sanjay782
2024-02-16T05:49:46Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:NousResearch/Llama-2-7b-hf", "base_model:adapter:NousResearch/Llama-2-7b-hf", "region:us" ]
null
2024-02-16T05:43:21Z
--- library_name: peft base_model: NousResearch/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.2.dev0
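For context, the quantization settings listed at the end of the card above map onto a `transformers` `BitsAndBytesConfig` roughly as follows; this is a reconstruction from the listed values, not code taken from the repository, and the adapter layout is assumed to be the standard PEFT one.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Reconstructed from the quantization values listed in the card above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-7b-hf",   # base model named in the adapter metadata
    quantization_config=bnb_config,
    device_map="auto",
)
# Attach the LoRA adapter from this repository.
model = PeftModel.from_pretrained(base, "sanjay782/test_qg")
```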
LarryAIDraw/satsuki
LarryAIDraw
2024-02-16T05:40:15Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-02-16T05:33:18Z
---
license: creativeml-openrail-m
---

https://civitai.com/models/55245/satsukiblue-archive-or-goofy-ai
codescv123/ppo-LunarLander-v2
codescv123
2024-02-16T05:36:21Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-16T05:36:02Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 255.91 +/- 18.35
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
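A possible completion of the usage stub above, following the usual `huggingface_sb3` pattern; the checkpoint filename is an assumption, since the repository contents are not listed here.

```python
import gymnasium as gym  # requires gymnasium[box2d] for LunarLander
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; check the repository for the actual .zip name.
checkpoint = load_from_hub(
    repo_id="codescv123/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```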
Evan-Lin/positive-chosen-llama-chat-without-none
Evan-Lin
2024-02-16T05:19:23Z
1
0
peft
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-01-29T10:25:17Z
---
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-chat-hf
model-index:
- name: dpo-llama-chat-without-none
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# dpo-llama-chat-without-none

This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9481
- Rewards/chosen: 4.6795
- Rewards/rejected: 2.8189
- Rewards/accuracies: 0.8547
- Rewards/margins: 1.8606
- Logps/rejected: -60.8495
- Logps/chosen: -50.0326
- Logits/rejected: -0.2216
- Logits/chosen: -0.2323

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 1000

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 6.3 | 0.24 | 100 | 6.1290 | 3.4767 | 3.2110 | 0.5920 | 0.2657 | -56.9286 | -62.0606 | -0.2723 | -0.2654 |
| 5.5843 | 0.48 | 200 | 5.8936 | 3.6904 | 3.2305 | 0.6520 | 0.4599 | -56.7330 | -59.9230 | 0.2517 | 0.2475 |
| 5.757 | 0.72 | 300 | 5.6694 | 3.9164 | 3.1893 | 0.7253 | 0.7271 | -57.1450 | -57.6631 | 0.3505 | 0.3418 |
| 5.5385 | 0.96 | 400 | 5.4629 | 4.1466 | 3.1351 | 0.7600 | 1.0115 | -57.6871 | -55.3611 | 0.2059 | 0.1970 |
| 5.2301 | 1.2 | 500 | 5.2891 | 4.3324 | 3.0305 | 0.7880 | 1.3020 | -58.7338 | -53.5027 | 0.1063 | 0.0968 |
| 5.0115 | 1.44 | 600 | 5.1601 | 4.4582 | 2.9458 | 0.8213 | 1.5124 | -59.5800 | -52.2452 | -0.1082 | -0.1154 |
| 4.9893 | 1.68 | 700 | 5.0431 | 4.5787 | 2.9142 | 0.8413 | 1.6645 | -59.8968 | -51.0404 | -0.1716 | -0.1829 |
| 5.0292 | 1.92 | 800 | 4.9770 | 4.6501 | 2.8827 | 0.8427 | 1.7673 | -60.2111 | -50.3266 | -0.1929 | -0.2042 |
| 4.331 | 2.16 | 900 | 4.9577 | 4.6724 | 2.8191 | 0.8480 | 1.8534 | -60.8478 | -50.1027 | -0.2005 | -0.2121 |
| 4.5481 | 2.4 | 1000 | 4.9481 | 4.6795 | 2.8189 | 0.8547 | 1.8606 | -60.8495 | -50.0326 | -0.2216 | -0.2323 |

### Framework versions

- PEFT 0.8.2
- Transformers 4.36.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
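A minimal inference sketch for this adapter, assuming the standard PEFT layout; the base model is gated, and the prompt format shown is illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"   # base model named in the card
adapter_id = "Evan-Lin/positive-chosen-llama-chat-without-none"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Illustrative Llama-2 chat-style prompt; not documented in the card.
prompt = "[INST] Write one encouraging sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```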
thrunlab/Mistral_Sparse_refined_web_90p_2024-02-15
thrunlab
2024-02-16T05:16:51Z
3
0
transformers
[ "transformers", "safetensors", "sparse_mistral", "text-generation", "generated_from_trainer", "custom_code", "base_model:mistralai/Mistral-7B-v0.1", "base_model:finetune:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2024-02-16T04:13:03Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - generated_from_trainer model-index: - name: Mistral_Sparse_refined_web_90p_2024-02-15 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral_Sparse_refined_web_90p_2024-02-15 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 7.5010 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 0 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - total_eval_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
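The `custom_code` tag above indicates the repository ships its own `sparse_mistral` model class, so loading it presumably requires `trust_remote_code=True`; a hedged sketch, not verified against the repository:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "thrunlab/Mistral_Sparse_refined_web_90p_2024-02-15"

tokenizer = AutoTokenizer.from_pretrained(repo)
# trust_remote_code is assumed to be needed because of the custom architecture.
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True, device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```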
bala1524/Drug_Comb_Pre_Mistral
bala1524
2024-02-16T05:11:56Z
0
0
keras
[ "keras", "biology", "medical", "conversational", "en", "dataset:CohereForAI/aya_collection", "license:apache-2.0", "region:us" ]
text-generation
2024-02-15T06:40:53Z
--- license: apache-2.0 language: - en tags: - biology - medical pipeline_tag: conversational datasets: - CohereForAI/aya_collection metrics: - chrf library_name: keras ---
EENDA/distilbert-finetuned-squadv2
EENDA
2024-02-16T05:10:45Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-02-16T02:37:55Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-finetuned-squadv2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-finetuned-squadv2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.2
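A short usage sketch with the `question-answering` pipeline; the question and context below are illustrative only.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="EENDA/distilbert-finetuned-squadv2")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="distilbert-finetuned-squadv2 is a DistilBERT model fine-tuned on SQuAD v2.",
)
print(result["answer"], result["score"])
```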
FINNUMBER/Yi-Ko-6B-Finch-TQA-full
FINNUMBER
2024-02-16T04:54:36Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T04:17:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
neozhang2003/ppo-Huggy
neozhang2003
2024-02-16T04:52:42Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-02-16T04:52:29Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: neozhang2003/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
lvcalucioli/llamantino7b_question_answering_finetuining
lvcalucioli
2024-02-16T04:41:09Z
3
0
peft
[ "peft", "safetensors", "llama", "trl", "sft", "generated_from_trainer", "base_model:swap-uniba/LLaMAntino-2-7b-hf-ITA", "base_model:adapter:swap-uniba/LLaMAntino-2-7b-hf-ITA", "license:llama2", "4-bit", "bitsandbytes", "region:us" ]
null
2024-02-16T02:39:09Z
---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: swap-uniba/LLaMAntino-2-7b-hf-ITA
model-index:
- name: llamantino7b_question_answering_finetuining
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# llamantino7b_question_answering_finetuining

This model is a fine-tuned version of [swap-uniba/LLaMAntino-2-7b-hf-ITA](https://huggingface.co/swap-uniba/LLaMAntino-2-7b-hf-ITA) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4340

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4152 | 1.0 | 3 | 1.4624 |
| 1.3209 | 2.0 | 6 | 1.4340 |

### Framework versions

- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.2
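A hedged loading sketch using `AutoPeftModelForCausalLM`, which pulls in the base model named in the adapter config and attaches the adapter; the Italian prompt is illustrative only, as the card documents no prompt format.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "lvcalucioli/llamantino7b_question_answering_finetuining"

model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("swap-uniba/LLaMAntino-2-7b-hf-ITA")

# Illustrative question-answering prompt in Italian.
prompt = "Domanda: Qual è la capitale d'Italia?\nRisposta:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```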
neutronprawn/bloom-560m-ad
neutronprawn
2024-02-16T04:40:11Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-16T04:40:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Jarmac/lab1_finetuning
Jarmac
2024-02-16T04:35:55Z
118
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-15T22:46:07Z
--- license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-fr tags: - generated_from_trainer datasets: - kde4 model-index: - name: lab1_finetuning results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
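A brief usage sketch, assuming the checkpoint loads like any Marian English-to-French translation model; the input sentence is illustrative.

```python
from transformers import pipeline

translator = pipeline("translation", model="Jarmac/lab1_finetuning")

print(translator("Default to expanded threads")[0]["translation_text"])
```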
sayakpaul/pixel_peft_model-new
sayakpaul
2024-02-16T04:31:03Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-16T04:30:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sayakpaul/toy_peft_model-new
sayakpaul
2024-02-16T04:30:41Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
null
2024-02-12T06:03:40Z
--- library_name: peft base_model: stabilityai/stable-diffusion-xl-base-1.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
MrezaPRZ/sql-encoder-bert-large
MrezaPRZ
2024-02-16T04:30:10Z
91
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T04:29:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
NLUHOPOE/test-case-0
NLUHOPOE
2024-02-16T04:23:03Z
52
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:Open-Orca/SlimOrca", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T01:07:05Z
---
license: apache-2.0
datasets:
- Open-Orca/SlimOrca
language:
- en
---

# Model Details

* Model Description: This model is a test for data ordering.
* Developed by: Juhwan Lee
* Model Type: Large Language Model

# Model Architecture

This model is based on Mistral-7B-v0.1, which we fine-tune for the data-ordering task. Mistral-7B-v0.1 is a transformer model with the following architecture choices:

* Grouped-Query Attention
* Sliding-Window Attention
* Byte-fallback BPE tokenizer

# Dataset

We randomly sample from the Open-Orca dataset and fine-tune on 100,000 examples.

# Github

https://github.com/trailerAI

# License

Apache License 2.0
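A minimal generation sketch; the card does not document a prompt format, so the plain prompt below is an assumption.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "NLUHOPOE/test-case-0"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Plain prompt; the expected instruction format is not stated in the card.
prompt = "Explain in one sentence what grouped-query attention is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```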
FINNUMBER/Yi-Ko-6B-Finch-NQA-COM-full
FINNUMBER
2024-02-16T04:17:51Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T03:41:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kahala/kahlagahan
kahala
2024-02-16T04:07:20Z
0
0
null
[ "region:us" ]
null
2024-02-16T04:06:45Z
<p><strong>The Importance of Bodies of Water: Understanding and Valuing Nature</strong></p> <p>The Philippines is rich in natural resources, including its many different bodies of water. These not only lend beauty to our surroundings but also provide vital services to our ecosystem and our way of life. Yet we often fail to fully grasp the importance of <a href="https://kahalagahan.com/anyong-tubig"><strong>bodies of water</strong></a> to our society.</p> <p><strong>What exactly are bodies of water?</strong></p> <p>In the simplest terms, a body of water is any place where water collects. It may be a vast ocean, a river, a lake, or even a small spring in the mountains. Each of these has its own role and benefit for nature and for people.</p> <p><strong>The Importance of Bodies of Water to Nature</strong></p> <p>Bodies of water reflect the character of a place and reveal the richness of its biodiversity. The oceans, for example, are home to many kinds of fish, coral reefs, and other creatures that make up the marine ecosystem. Rivers and lakes, in turn, provide shelter and food for many species of animals and plants.</p> <p><strong>Bodies of Water as Part of Our Way of Life</strong></p> <p>For many years, bodies of water have been an essential part of human life. Rivers, for example, are used for transportation, farming, and supporting industry. The sea provides food and livelihood for coastal communities and fishers.</p> <p><strong>Valuing Nature: The Key to Lasting Development</strong></p> <p>Yet despite their <a href="https://kahalagahan.com"><strong>importance</strong></a>, we often neglect to care for our bodies of water. Excessive dumping of garbage, overfishing, and pollution inflict serious damage on them. It is therefore essential that we become thoughtful and accountable stewards of our environment.</p>
Shijia/furina_seed42_eng_kin_amh_cross_0.0001
Shijia
2024-02-16T03:57:58Z
90
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:yihongLiu/furina", "base_model:finetune:yihongLiu/furina", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T03:56:32Z
--- base_model: yihongLiu/furina tags: - generated_from_trainer model-index: - name: furina_seed42_eng_kin_amh_cross_0.0001 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # furina_seed42_eng_kin_amh_cross_0.0001 This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0269 - Spearman Corr: 0.7365 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Spearman Corr | |:-------------:|:-----:|:----:|:---------------:|:-------------:| | No log | 0.59 | 200 | 0.0342 | 0.5390 | | No log | 1.17 | 400 | 0.0309 | 0.4762 | | No log | 1.76 | 600 | 0.0333 | 0.6360 | | 0.0424 | 2.35 | 800 | 0.0407 | 0.6425 | | 0.0424 | 2.93 | 1000 | 0.0304 | 0.6871 | | 0.0424 | 3.52 | 1200 | 0.0316 | 0.6953 | | 0.0231 | 4.11 | 1400 | 0.0249 | 0.7122 | | 0.0231 | 4.69 | 1600 | 0.0405 | 0.7040 | | 0.0231 | 5.28 | 1800 | 0.0365 | 0.7094 | | 0.0231 | 5.87 | 2000 | 0.0327 | 0.7062 | | 0.0155 | 6.45 | 2200 | 0.0258 | 0.6996 | | 0.0155 | 7.04 | 2400 | 0.0324 | 0.7080 | | 0.0155 | 7.62 | 2600 | 0.0265 | 0.7257 | | 0.0095 | 8.21 | 2800 | 0.0297 | 0.7239 | | 0.0095 | 8.8 | 3000 | 0.0244 | 0.7276 | | 0.0095 | 9.38 | 3200 | 0.0282 | 0.7339 | | 0.0095 | 9.97 | 3400 | 0.0290 | 0.7252 | | 0.0064 | 10.56 | 3600 | 0.0242 | 0.7284 | | 0.0064 | 11.14 | 3800 | 0.0239 | 0.7332 | | 0.0064 | 11.73 | 4000 | 0.0248 | 0.7300 | | 0.0049 | 12.32 | 4200 | 0.0258 | 0.7320 | | 0.0049 | 12.9 | 4400 | 0.0246 | 0.7271 | | 0.0049 | 13.49 | 4600 | 0.0269 | 0.7373 | | 0.0038 | 14.08 | 4800 | 0.0285 | 0.7336 | | 0.0038 | 14.66 | 5000 | 0.0262 | 0.7316 | | 0.0038 | 15.25 | 5200 | 0.0279 | 0.7320 | | 0.0038 | 15.84 | 5400 | 0.0269 | 0.7365 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
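The card above leaves its "How to Get Started" section empty. A minimal inference sketch for this kind of fine-tuned XLM-R-style checkpoint might look like the following; the sentence-pair input and the single-logit regression head are assumptions inferred from the Spearman-correlation metric, not facts documented by the card, and the same pattern would apply to the other furina fine-tune further down in this dump (Shijia/furina_seed42_eng_amh_hau_cross_2e-05):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Shijia/furina_seed42_eng_kin_amh_cross_0.0001"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Score one sentence pair; the Spearman-correlation metric on the card suggests
# the head outputs a single continuous relatedness score rather than class logits.
inputs = tokenizer("A child is playing outside.", "A kid plays in the yard.",
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"relatedness score: {score:.3f}")
```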
haihuynh/rl_course_vizdoom_health_gathering_supreme
haihuynh
2024-02-16T03:51:57Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-16T03:51:51Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 11.41 +/- 6.15 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r haihuynh/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
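The `-m` target in the two commands above (`.usr.local.lib.python3.10.dist-packages.colab_kernel_launcher`) is an artifact of the Colab kernel the card was generated from, not a module you would normally invoke. With a standard Sample-Factory 2.x checkout, the equivalent evaluation call would typically go through the ViZDoom example entry point; the module path below is assumed from the upstream `sf_examples` layout and may need adjusting to wherever your environment registration lives:

```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```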
srmishra/crossencoder-tynybert-km1
srmishra
2024-02-16T03:51:01Z
94
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:cross-encoder/stsb-TinyBERT-L-4", "base_model:finetune:cross-encoder/stsb-TinyBERT-L-4", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T03:50:42Z
--- license: apache-2.0 base_model: cross-encoder/stsb-TinyBERT-L-4 tags: - generated_from_trainer model-index: - name: crossencoder-tynybert-km1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # crossencoder-tynybert-km1 This model is a fine-tuned version of [cross-encoder/stsb-TinyBERT-L-4](https://huggingface.co/cross-encoder/stsb-TinyBERT-L-4) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0014 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0054 | 1.0 | 125 | 0.0074 | | 0.005 | 2.0 | 250 | 0.0051 | | 0.0035 | 3.0 | 375 | 0.0008 | | 0.0015 | 4.0 | 500 | 0.0010 | | 0.0026 | 5.0 | 625 | 0.0031 | | 0.0011 | 6.0 | 750 | 0.0017 | | 0.0009 | 7.0 | 875 | 0.0017 | | 0.001 | 8.0 | 1000 | 0.0010 | | 0.0008 | 9.0 | 1125 | 0.0013 | | 0.0008 | 10.0 | 1250 | 0.0014 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0 - Datasets 2.14.6 - Tokenizers 0.15.1
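The crossencoder-tynybert-km1 card does not say what the sentence pairs were scored against; since the base checkpoint is an STS-style cross-encoder, a pairwise similarity/relevance score is assumed in this sketch, and the example pairs are made up:

```python
from sentence_transformers import CrossEncoder

# Assumes the fine-tune keeps the single-score regression head of the base
# cross-encoder/stsb-TinyBERT-L-4 checkpoint.
model = CrossEncoder("srmishra/crossencoder-tynybert-km1")
scores = model.predict([
    ("How do I reset my password?", "Steps to recover account access"),
    ("How do I reset my password?", "Best hiking trails near Denver"),
])
print(scores)  # higher score = more related, under the assumed STS-style head
```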
FINNUMBER/Yi-Ko-6B-Finch-NQA-EXT-full
FINNUMBER
2024-02-16T03:41:09Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T03:04:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
supung/swin-tiny-patch4-window7-224-finetuned-eurosat
supung
2024-02-16T03:37:43Z
197
0
transformers
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-02-16T03:26:56Z
--- license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - generated_from_trainer metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0830 - Accuracy: 0.9698 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3484 | 1.0 | 114 | 0.1715 | 0.9457 | | 0.2188 | 2.0 | 228 | 0.0976 | 0.9710 | | 0.2193 | 3.0 | 342 | 0.0830 | 0.9698 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
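Since the usage section of the card above is empty, here is a minimal, hedged inference sketch; `satellite_tile.jpg` is a hypothetical input file, and the EuroSAT-style land-cover labels are inferred from the model name rather than documented:

```python
from transformers import pipeline

# Image-classification pipeline around the fine-tuned Swin checkpoint.
classifier = pipeline(
    "image-classification",
    model="supung/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
print(classifier("satellite_tile.jpg"))  # hypothetical local image path
```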
Basha738/outputs
Basha738
2024-02-16T03:36:59Z
8
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "region:us" ]
null
2024-02-08T06:34:14Z
--- library_name: peft tags: - trl - sft - generated_from_trainer base_model: LLama_weights/tmp model-index: - name: outputs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # outputs This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4939 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0181 | 0.24 | 4 | 1.9684 | | 2.0616 | 0.47 | 8 | 1.8863 | | 1.8467 | 0.71 | 12 | 1.8116 | | 1.707 | 0.94 | 16 | 1.7309 | | 1.7886 | 1.18 | 20 | 1.6529 | | 1.6539 | 1.41 | 24 | 1.5884 | | 1.5149 | 1.65 | 28 | 1.5568 | | 1.4526 | 1.88 | 32 | 1.5390 | | 1.5335 | 2.12 | 36 | 1.5283 | | 1.5668 | 2.35 | 40 | 1.5211 | | 1.3914 | 2.59 | 44 | 1.5158 | | 1.5769 | 2.82 | 48 | 1.5113 | | 1.3794 | 3.06 | 52 | 1.5075 | | 1.5274 | 3.29 | 56 | 1.5043 | | 1.5247 | 3.53 | 60 | 1.5016 | | 1.4291 | 3.76 | 64 | 1.4993 | | 1.4233 | 4.0 | 68 | 1.4974 | | 1.4353 | 4.24 | 72 | 1.4960 | | 1.6016 | 4.47 | 76 | 1.4949 | | 1.4416 | 4.71 | 80 | 1.4942 | | 1.4654 | 4.94 | 84 | 1.4939 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0+cu118 - Datasets 2.17.0 - Tokenizers 0.15.1
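The outputs card records a PEFT/SFT adapter trained against a local base checkpoint (`LLama_weights/tmp`), so the adapter alone is not directly usable. A loading sketch would look roughly like this; the base-model path is a placeholder that must point at weights matching the ones the adapter was trained on, and it is assumed the repository ships the usual adapter config and weights:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_path = "path/to/matching-llama-base"  # placeholder: must match LLama_weights/tmp
tokenizer = AutoTokenizer.from_pretrained(base_path)
base = AutoModelForCausalLM.from_pretrained(base_path)

# Attach the adapter published in this record on top of the base weights.
model = PeftModel.from_pretrained(base, "Basha738/outputs")

inputs = tokenizer("Hello, how are", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```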
Basha738/llama2-13B-supervised-eos-ft-10-epochs-351
Basha738
2024-02-16T03:35:02Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-02-16T03:29:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
evanrsl/facial_emotion_model
evanrsl
2024-02-16T03:33:00Z
179
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-02-16T02:34:50Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: facial_emotion_model results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train[:5000] args: default metrics: - name: Accuracy type: accuracy value: 0.55625 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # facial_emotion_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2427 - Accuracy: 0.5563 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 1.8904 | 0.3125 | | No log | 2.0 | 80 | 1.6093 | 0.4437 | | No log | 3.0 | 120 | 1.4846 | 0.4813 | | No log | 4.0 | 160 | 1.4352 | 0.5437 | | No log | 5.0 | 200 | 1.3533 | 0.5 | | No log | 6.0 | 240 | 1.3076 | 0.5188 | | No log | 7.0 | 280 | 1.2484 | 0.55 | | No log | 8.0 | 320 | 1.2073 | 0.5875 | | No log | 9.0 | 360 | 1.2465 | 0.5687 | | No log | 10.0 | 400 | 1.2770 | 0.5188 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
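For the facial_emotion_model record above, a minimal inference sketch with the standard ViT processing pipeline is shown below; `face.jpg` is a hypothetical input image, and the emotion label set is whatever `id2label` mapping the checkpoint was trained with (the card does not list it):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "evanrsl/facial_emotion_model"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("face.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```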
kohankhaki/opt-350m-sentiment-sst5-mapped-grouped-4
kohankhaki
2024-02-16T03:25:06Z
90
0
transformers
[ "transformers", "safetensors", "opt", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T03:24:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kohankhaki/opt-350m-sentiment-sst5-mapped-grouped-2
kohankhaki
2024-02-16T03:23:21Z
90
0
transformers
[ "transformers", "safetensors", "opt", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T03:22:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kohankhaki/opt-350m-sentiment-sst5-mapped-grouped-0
kohankhaki
2024-02-16T03:21:33Z
91
0
transformers
[ "transformers", "safetensors", "opt", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T03:20:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kohankhaki/opt-125m-sentiment-sst5-mapped-grouped-4
kohankhaki
2024-02-16T03:20:36Z
90
0
transformers
[ "transformers", "safetensors", "opt", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T03:20:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kohankhaki/opt-125m-sentiment-sst5-mapped-grouped-3
kohankhaki
2024-02-16T03:20:18Z
90
0
transformers
[ "transformers", "safetensors", "opt", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T03:20:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Shijia/furina_seed42_eng_amh_hau_cross_2e-05
Shijia
2024-02-16T03:19:15Z
100
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:yihongLiu/furina", "base_model:finetune:yihongLiu/furina", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T03:17:53Z
--- base_model: yihongLiu/furina tags: - generated_from_trainer model-index: - name: furina_seed42_eng_amh_hau_cross_2e-05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # furina_seed42_eng_amh_hau_cross_2e-05 This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0232 - Spearman Corr: 0.7701 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Spearman Corr | |:-------------:|:-----:|:----:|:---------------:|:-------------:| | No log | 0.52 | 200 | 0.0299 | 0.6210 | | No log | 1.04 | 400 | 0.0272 | 0.6985 | | No log | 1.55 | 600 | 0.0249 | 0.7315 | | 0.0481 | 2.07 | 800 | 0.0275 | 0.7413 | | 0.0481 | 2.59 | 1000 | 0.0223 | 0.7551 | | 0.0481 | 3.11 | 1200 | 0.0208 | 0.7640 | | 0.0481 | 3.63 | 1400 | 0.0212 | 0.7648 | | 0.0233 | 4.15 | 1600 | 0.0210 | 0.7682 | | 0.0233 | 4.66 | 1800 | 0.0231 | 0.7620 | | 0.0233 | 5.18 | 2000 | 0.0210 | 0.7816 | | 0.0233 | 5.7 | 2200 | 0.0220 | 0.7761 | | 0.0167 | 6.22 | 2400 | 0.0209 | 0.7644 | | 0.0167 | 6.74 | 2600 | 0.0211 | 0.7677 | | 0.0167 | 7.25 | 2800 | 0.0232 | 0.7701 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
kohankhaki/roberta-large-sentiment-sst5-mapped-grouped-4
kohankhaki
2024-02-16T03:19:07Z
92
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T03:18:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kohankhaki/roberta-large-sentiment-sst5-mapped-grouped-3
kohankhaki
2024-02-16T03:18:12Z
91
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T03:17:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Cesar2109/mi-super-modelo
Cesar2109
2024-02-16T03:17:04Z
91
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T02:59:59Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer metrics: - accuracy model-index: - name: mi-super-modelo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mi-super-modelo This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5841 - Accuracy: 0.275 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.5886 | 0.5 | 5 | 1.5863 | 0.325 | | 1.6271 | 1.0 | 10 | 1.5841 | 0.275 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
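The card above lists hyperparameters but no usage snippet. A minimal inference sketch follows, assuming the checkpoint loads as a standard `transformers` sequence-classification model; the example sentence is illustrative and the label semantics are not documented in the card.

```python
from transformers import pipeline

# Load the fine-tuned BERT classifier straight from the Hub.
classifier = pipeline("text-classification", model="Cesar2109/mi-super-modelo")

# The card does not document what the labels mean (the eval accuracy above suggests
# a small multi-class demo), so treat the returned label names as opaque.
print(classifier("Este es un ejemplo de texto para clasificar."))
```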
kohankhaki/roberta-large-sentiment-sst5-mapped-grouped-1
kohankhaki
2024-02-16T03:16:15Z
93
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T03:15:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kohankhaki/roberta-large-sentiment-sst5-mapped-grouped-0
kohankhaki
2024-02-16T03:15:14Z
93
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T03:14:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kohankhaki/roberta-base-sentiment-sst5-mapped-grouped-1
kohankhaki
2024-02-16T03:13:21Z
92
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T03:13:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kohankhaki/roberta-base-sentiment-sst5-mapped-grouped-0
kohankhaki
2024-02-16T03:13:04Z
92
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T03:12:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Shijia/furina_seed42_eng_amh_hau_cross_0.0001
Shijia
2024-02-16T03:07:08Z
101
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:yihongLiu/furina", "base_model:finetune:yihongLiu/furina", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T03:05:54Z
--- base_model: yihongLiu/furina tags: - generated_from_trainer model-index: - name: furina_seed42_eng_amh_hau_cross_0.0001 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # furina_seed42_eng_amh_hau_cross_0.0001 This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0312 - Spearman Corr: 0.7298 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Spearman Corr | |:-------------:|:-----:|:----:|:---------------:|:-------------:| | No log | 0.52 | 200 | 0.0443 | 0.1783 | | No log | 1.04 | 400 | 0.0333 | 0.5121 | | No log | 1.55 | 600 | 0.0424 | 0.5339 | | 0.0522 | 2.07 | 800 | 0.0398 | 0.5674 | | 0.0522 | 2.59 | 1000 | 0.0328 | 0.6002 | | 0.0522 | 3.11 | 1200 | 0.0313 | 0.6285 | | 0.0522 | 3.63 | 1400 | 0.0292 | 0.6480 | | 0.0361 | 4.15 | 1600 | 0.0297 | 0.6471 | | 0.0361 | 4.66 | 1800 | 0.0298 | 0.6724 | | 0.0361 | 5.18 | 2000 | 0.0308 | 0.7280 | | 0.0361 | 5.7 | 2200 | 0.0262 | 0.7299 | | 0.0258 | 6.22 | 2400 | 0.0255 | 0.7406 | | 0.0258 | 6.74 | 2600 | 0.0284 | 0.7288 | | 0.0258 | 7.25 | 2800 | 0.0295 | 0.7337 | | 0.0258 | 7.77 | 3000 | 0.0300 | 0.7393 | | 0.0164 | 8.29 | 3200 | 0.0271 | 0.7451 | | 0.0164 | 8.81 | 3400 | 0.0319 | 0.7359 | | 0.0164 | 9.33 | 3600 | 0.0261 | 0.7314 | | 0.0164 | 9.84 | 3800 | 0.0290 | 0.7265 | | 0.0105 | 10.36 | 4000 | 0.0312 | 0.7298 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
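The card reports a Spearman correlation on the evaluation set, which suggests a single-output regression head scoring sentence pairs (e.g. semantic relatedness). The sketch below assumes exactly that — a one-logit xlm-roberta regression head fed a concatenated sentence pair — since the card does not document preprocessing; the example sentences are placeholders.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Shijia/furina_seed42_eng_amh_hau_cross_0.0001"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Assumed usage: score how related two sentences are with the regression head.
inputs = tokenizer("A man is playing a guitar.", "Someone is playing an instrument.",
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```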
platero/ppo-Huggy
platero
2024-02-16T02:56:25Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-02-16T02:56:20Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity 2. Find your model_id: platero/ppo-Huggy 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
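The card covers resuming training and in-browser playback, but not pulling the trained policy locally. A small sketch using `huggingface_hub` (an assumption — the card itself only shows ML-Agents CLI commands) to download the repository and locate the exported ONNX policy:

```python
from pathlib import Path
from huggingface_hub import snapshot_download

# Download the whole ppo-Huggy repository (run config, checkpoints, exported policy).
local_dir = snapshot_download(repo_id="platero/ppo-Huggy")

# The exported policy is an .onnx file; its exact name depends on the run configuration.
for onnx_path in Path(local_dir).rglob("*.onnx"):
    print(onnx_path)
```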
Shijia/furina_seed42_eng_kin_hau_cross_2e-05
Shijia
2024-02-16T02:35:08Z
90
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:yihongLiu/furina", "base_model:finetune:yihongLiu/furina", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T02:33:39Z
--- base_model: yihongLiu/furina tags: - generated_from_trainer model-index: - name: furina_seed42_eng_kin_hau_cross_2e-05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # furina_seed42_eng_kin_hau_cross_2e-05 This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0268 - Spearman Corr: 0.7372 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Spearman Corr | |:-------------:|:-----:|:----:|:---------------:|:-------------:| | No log | 0.53 | 200 | 0.0336 | 0.6022 | | No log | 1.06 | 400 | 0.0303 | 0.6548 | | No log | 1.6 | 600 | 0.0329 | 0.6851 | | 0.0491 | 2.13 | 800 | 0.0288 | 0.7186 | | 0.0491 | 2.66 | 1000 | 0.0258 | 0.7170 | | 0.0491 | 3.19 | 1200 | 0.0272 | 0.7286 | | 0.0491 | 3.72 | 1400 | 0.0285 | 0.7289 | | 0.0229 | 4.26 | 1600 | 0.0264 | 0.7193 | | 0.0229 | 4.79 | 1800 | 0.0303 | 0.7334 | | 0.0229 | 5.32 | 2000 | 0.0257 | 0.7393 | | 0.0229 | 5.85 | 2200 | 0.0260 | 0.7466 | | 0.0159 | 6.38 | 2400 | 0.0251 | 0.7402 | | 0.0159 | 6.91 | 2600 | 0.0256 | 0.7396 | | 0.0159 | 7.45 | 2800 | 0.0266 | 0.7453 | | 0.0159 | 7.98 | 3000 | 0.0268 | 0.7395 | | 0.0114 | 8.51 | 3200 | 0.0266 | 0.7433 | | 0.0114 | 9.04 | 3400 | 0.0261 | 0.7459 | | 0.0114 | 9.57 | 3600 | 0.0260 | 0.7410 | | 0.0087 | 10.11 | 3800 | 0.0274 | 0.7428 | | 0.0087 | 10.64 | 4000 | 0.0268 | 0.7372 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
lvcalucioli/results
lvcalucioli
2024-02-16T02:30:13Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "t5", "trl", "sft", "generated_from_trainer", "base_model:swap-uniba/LLaMAntino-2-7b-hf-ITA", "base_model:adapter:swap-uniba/LLaMAntino-2-7b-hf-ITA", "license:llama2", "region:us" ]
null
2024-02-14T13:54:54Z
--- license: llama2 library_name: peft tags: - trl - sft - generated_from_trainer base_model: swap-uniba/LLaMAntino-2-7b-hf-ITA model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [swap-uniba/LLaMAntino-2-7b-hf-ITA](https://huggingface.co/swap-uniba/LLaMAntino-2-7b-hf-ITA) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4095 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6551 | 1.0 | 90 | 1.4257 | | 1.1957 | 2.0 | 180 | 1.3750 | | 0.8459 | 3.0 | 270 | 1.4095 | ### Framework versions - PEFT 0.8.2 - Transformers 4.38.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.16.1 - Tokenizers 0.15.2
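The card describes a PEFT/SFT adapter trained on top of swap-uniba/LLaMAntino-2-7b-hf-ITA but includes no loading code. A minimal sketch, assuming the repository holds a standard PEFT adapter (adapter config plus weights) that attaches to the base model:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "swap-uniba/LLaMAntino-2-7b-hf-ITA"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the fine-tuned adapter from this repository on top of the base weights.
model = PeftModel.from_pretrained(base_model, "lvcalucioli/results")
model.eval()
```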
rupeshs/antelopev2
rupeshs
2024-02-16T02:25:33Z
0
2
null
[ "onnx", "license:mit", "region:us" ]
null
2024-02-16T02:20:40Z
--- license: mit --- Note that these models are available for non-commercial research purposes only. For more details, please check: https://pypi.org/project/insightface/0.6/
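The card only points to the insightface package. Below is a sketch of how an antelopev2-style model pack is typically used through insightface's `FaceAnalysis` app; the directory layout (files under `~/.insightface/models/antelopev2`) and the input image path are assumptions, not documented here.

```python
import cv2
from insightface.app import FaceAnalysis

# Assumes the antelopev2 files from this repo have been placed under
# ~/.insightface/models/antelopev2 so insightface can find the pack by name.
app = FaceAnalysis(name="antelopev2", providers=["CPUExecutionProvider"])
app.prepare(ctx_id=0, det_size=(640, 640))

img = cv2.imread("face.jpg")  # placeholder image path
for face in app.get(img):
    print(face.bbox, face.embedding.shape)
```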
Shijia/furina_seed42_eng_kin_hau_cross_0.0001
Shijia
2024-02-16T02:18:19Z
90
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:yihongLiu/furina", "base_model:finetune:yihongLiu/furina", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T02:16:50Z
--- base_model: yihongLiu/furina tags: - generated_from_trainer model-index: - name: furina_seed42_eng_kin_hau_cross_0.0001 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # furina_seed42_eng_kin_hau_cross_0.0001 This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0490 - Spearman Corr: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Spearman Corr | |:-------------:|:-----:|:----:|:---------------:|:-------------:| | No log | 0.53 | 200 | 0.0493 | 0.0550 | | No log | 1.06 | 400 | 0.0495 | 0.0857 | | No log | 1.6 | 600 | 0.0491 | -0.0146 | | 0.0593 | 2.13 | 800 | 0.0491 | 0.0012 | | 0.0593 | 2.66 | 1000 | 0.0496 | 0.0851 | | 0.0593 | 3.19 | 1200 | 0.0493 | 0.0390 | | 0.0593 | 3.72 | 1400 | 0.0490 | 0.1463 | | 0.055 | 4.26 | 1600 | 0.0491 | 0.0244 | | 0.055 | 4.79 | 1800 | 0.0491 | nan | | 0.055 | 5.32 | 2000 | 0.0491 | nan | | 0.055 | 5.85 | 2200 | 0.0494 | nan | | 0.0541 | 6.38 | 2400 | 0.0493 | nan | | 0.0541 | 6.91 | 2600 | 0.0491 | -0.0093 | | 0.0541 | 7.45 | 2800 | 0.0490 | nan | | 0.0541 | 7.98 | 3000 | 0.0490 | nan | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
furrutiav/math_bert_qa_extractor_cockatiel_2022_mixtral_v2_it_1597
furrutiav
2024-02-16T02:12:16Z
90
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
2024-02-16T02:10:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PipableAI/pip-SQL-1B
PipableAI
2024-02-16T02:09:58Z
54
8
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "sql", "text2sql", "instruction_tuned", "jax", "pytorch", "1b", "expert", "en", "dataset:PipableAI/spider-bird", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T07:29:13Z
--- license: mit language: - en metrics: - accuracy pipeline_tag: text-generation widget: - text: "<schema>CREATE TABLE radio(age VARCHAR, radio_id VARCHAR, frequency VARCHAR, wavelength VARCHAR); CREATE TABLE radio_faults(radio_id VARCHAR, fault_description VARCHAR)</schema><question>Get the radio id and defect descriptions of radios that have wavelength greater than 30 ?</question><sql>" example_title: "example1" - text: "<schema>CREATE TABLE system(JobID: String,GID: String, UID: String, Start:Time(yyyy/mm/dd), End: Time,ElapsedRaw: Time, CPUTimeRAW: Time,NCPUS: Number,NNodes: Number, NodeList: List, State:String, Timelimit: Time);</schema><question>Get UID and job id for Jobs that started on Jan 20 , 2023</question><sql>" example_title: "example2" - text: "<schema>CREATE TABLE department (Department_ID number, Name text, Creation text, Ranking number, Budget_in_Billions number, Num_Employees number) which has Department_ID as primary key and CREATE TABLE head (head_ID number, name text, born_state text, age number) which has head_ID as primary key and CREATE TABLE management (department_ID number, head_ID number, temporary_acting text) which has department_ID as primary key</schema><question>" example_title: "example3" tags: - code - sql - text2sql - instruction_tuned - jax - pytorch - 1b - expert datasets: - PipableAI/spider-bird --- # Pipable’s pipSQL Please refer to https://huggingface.co/PipableAI/pipSQL-1.3b for our state-of-the-art model, which gives better performance than ChatGPT and Claude on SQL tasks across many benchmarks. Pipable’s pipSQL is a model distilled from Llama 1B to generate SQL queries given a prompt and a schema. We used a unique pipeline in which the model alternated between two objectives: 1. Maximizing the log prob of all tokens in the sequence (including the prompt tokens) 2. Minimizing the difference between the true value and the predicted maximum value of the output tokens, i.e. the generated tokens for the SQL-query slice of the entire sequence. ## License The model's new weights, along with all other assets involved with it, are open-sourced under the MIT license. ## How to Use ```python text = """<schema>{schema}</schema> <question>{question}</question> <sql>""" ``` pytorch ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" model = AutoModelForCausalLM.from_pretrained("PipableAI/pipSQL1b").to(device) tokenizer = AutoTokenizer.from_pretrained("PipableAI/pipSQL1b") inputs = tokenizer(text, return_tensors="pt").to(device) outputs = model.generate(**inputs, max_new_tokens=200) print(tokenizer.decode(outputs[0], skip_special_tokens=True).split('<sql>')[1].split('</sql>')[0]) ``` flax ```python from transformers import FlaxAutoModelForCausalLM, AutoTokenizer model = FlaxAutoModelForCausalLM.from_pretrained("PipableAI/pipSQL1b", from_pt=True) tokenizer = AutoTokenizer.from_pretrained("PipableAI/pipSQL1b") ``` ## The PipableAI team Avi Kothari, Pratham Gupta, Ritvik Aryan Kalra, Rohan Bhatial, Soham Acharya
nvidia/OpenMath-CodeLlama-34b-Python-hf
nvidia
2024-02-16T02:09:43Z
23
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "nvidia", "code", "math", "en", "dataset:nvidia/OpenMathInstruct-1", "arxiv:2402.10176", "base_model:codellama/CodeLlama-34b-Python-hf", "base_model:finetune:codellama/CodeLlama-34b-Python-hf", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-10T03:26:21Z
--- license: llama2 base_model: - codellama/CodeLlama-34b-Python-hf datasets: - nvidia/OpenMathInstruct-1 language: - en tags: - nvidia - code - math --- # OpenMath-CodeLlama-34b-Python-hf OpenMath models were designed to solve mathematical problems by integrating text-based reasoning with code blocks executed by Python interpreter. The models were trained on [OpenMathInstruct-1](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1), a math instruction tuning dataset with 1.8M problem-solution pairs generated using permissively licensed [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) model. <table border="1"> <tr> <td></td> <td colspan="2" style="text-align: center;">greedy</td> <td colspan="2" style="text-align: center;">majority@50</td> </tr> <tr> <td style="text-align: center;">model</td> <td style="text-align: center;">GSM8K</td> <td style="text-align: center;">MATH</td> <td style="text-align: center;">GMS8K</td> <td style="text-align: center;">MATH</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-7B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-7b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-7b-Python-hf">HF</a>)</td> <td style="text-align: center;">75.9</td> <td style="text-align: center;">43.6</td> <td style="text-align: center;">84.8</td> <td style="text-align: center;">55.6</td> </tr> <tr> <td style="text-align: right;">OpenMath-Mistral-7B (<a href="https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1-hf">HF</a>)</td> <td style="text-align: center;">80.2</td> <td style="text-align: center;">44.5</td> <td style="text-align: center;">86.9</td> <td style="text-align: center;">57.2</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-13B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-13b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-13b-Python-hf">HF</a>)</td> <td style="text-align: center;">78.8</td> <td style="text-align: center;">45.5</td> <td style="text-align: center;">86.8</td> <td style="text-align: center;">57.6</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-34B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-34b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-34b-Python-hf">HF</a>)</td> <td style="text-align: center;">80.7</td> <td style="text-align: center;">48.3</td> <td style="text-align: center;">88.0</td> <td style="text-align: center;">60.2</td> </tr> <tr> <td style="text-align: right;">OpenMath-Llama2-70B (<a href="https://huggingface.co/nvidia/OpenMath-Llama-2-70b">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-Llama-2-70b-hf">HF</a>)</td> <td style="text-align: center;"><b>84.7</b></td> <td style="text-align: center;">46.3</td> <td style="text-align: center;">90.1</td> <td style="text-align: center;">58.3</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-70B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-70b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-70b-Python-hf">HF</a>)</td> <td style="text-align: center;">84.6</td> <td style="text-align: center;"><b>50.7</b></td> <td style="text-align: center;"><b>90.8</b></td> <td style="text-align: center;"><b>60.4</b></td> </tr> </table> The pipeline we used to produce these models is fully open-sourced! 
- [Code](https://github.com/Kipok/NeMo-Skills) - [Models](https://huggingface.co/collections/nvidia/openmath-65c5619de2ba059be0775014) - [Dataset](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1) See our [paper](https://arxiv.org/abs/2402.10176) for more details! # How to use the models? Try to [run inference with our models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/inference.md) with just a few commands! # Reproducing our results We provide [all instructions](https://github.com/Kipok/NeMo-Skills/blob/main/docs/reproducing-results.md) to fully reproduce our results. # Improving other models To improve other models or to learn more about our code, read through the docs below. - [NeMo-Skills Pipeline](https://github.com/Kipok/NeMo-Skills) - [Generating synthetic data](https://github.com/Kipok/NeMo-Skills/blob/main/docs/synthetic-data-generation.md) - [Finetuning models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/finetuning.md) - [Evaluating models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/evaluation.md) In our pipeline we use [NVIDIA NeMo](https://www.nvidia.com/en-us/ai-data-science/generative-ai/nemo-framework/), an end-to-end, cloud-native framework to build, customize, and deploy generative AI models anywhere. It includes training and inferencing frameworks, guardrailing toolkits, data curation tools, and pretrained models, offering enterprises an easy, cost-effective, and fast way to adopt generative AI. # Citation If you find our work useful, please consider citing us! ```bibtex @article{toshniwal2024openmath, title = {OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset}, author = {Shubham Toshniwal and Ivan Moshkov and Sean Narenthiran and Daria Gitman and Fei Jia and Igor Gitman}, year = {2024}, journal = {arXiv preprint arXiv: Arxiv-2402.10176} } ``` # License The use of this model is governed by the [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/)
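The card defers inference to the NeMo-Skills pipeline. For a quick check of the -hf checkpoint, a plain `transformers` sketch is shown below; the prompt is a placeholder, and the proper prompt template plus the code-execution loop (running the emitted Python blocks) are handled by NeMo-Skills, not by this snippet.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/OpenMath-CodeLlama-34b-Python-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Raw generation only: no code execution, so answers that rely on the Python
# interpreter will be incomplete without the NeMo-Skills tool-use loop.
prompt = "What is the minimum value of x^2 - 6x + 13?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```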
nvidia/OpenMath-CodeLlama-34b-Python
nvidia
2024-02-16T02:09:36Z
0
3
nemo
[ "nemo", "nvidia", "code", "math", "en", "dataset:nvidia/OpenMathInstruct-1", "arxiv:2402.10176", "base_model:codellama/CodeLlama-34b-Python-hf", "base_model:finetune:codellama/CodeLlama-34b-Python-hf", "license:llama2", "region:us" ]
null
2024-02-10T03:26:02Z
--- license: llama2 base_model: - codellama/CodeLlama-34b-Python-hf datasets: - nvidia/OpenMathInstruct-1 language: - en library_name: nemo tags: - nvidia - code - math --- # OpenMath-CodeLlama-34b-Python OpenMath models were designed to solve mathematical problems by integrating text-based reasoning with code blocks executed by Python interpreter. The models were trained on [OpenMathInstruct-1](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1), a math instruction tuning dataset with 1.8M problem-solution pairs generated using permissively licensed [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) model. <table border="1"> <tr> <td></td> <td colspan="2" style="text-align: center;">greedy</td> <td colspan="2" style="text-align: center;">majority@50</td> </tr> <tr> <td style="text-align: center;">model</td> <td style="text-align: center;">GSM8K</td> <td style="text-align: center;">MATH</td> <td style="text-align: center;">GMS8K</td> <td style="text-align: center;">MATH</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-7B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-7b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-7b-Python-hf">HF</a>)</td> <td style="text-align: center;">75.9</td> <td style="text-align: center;">43.6</td> <td style="text-align: center;">84.8</td> <td style="text-align: center;">55.6</td> </tr> <tr> <td style="text-align: right;">OpenMath-Mistral-7B (<a href="https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1-hf">HF</a>)</td> <td style="text-align: center;">80.2</td> <td style="text-align: center;">44.5</td> <td style="text-align: center;">86.9</td> <td style="text-align: center;">57.2</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-13B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-13b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-13b-Python-hf">HF</a>)</td> <td style="text-align: center;">78.8</td> <td style="text-align: center;">45.5</td> <td style="text-align: center;">86.8</td> <td style="text-align: center;">57.6</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-34B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-34b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-34b-Python-hf">HF</a>)</td> <td style="text-align: center;">80.7</td> <td style="text-align: center;">48.3</td> <td style="text-align: center;">88.0</td> <td style="text-align: center;">60.2</td> </tr> <tr> <td style="text-align: right;">OpenMath-Llama2-70B (<a href="https://huggingface.co/nvidia/OpenMath-Llama-2-70b">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-Llama-2-70b-hf">HF</a>)</td> <td style="text-align: center;"><b>84.7</b></td> <td style="text-align: center;">46.3</td> <td style="text-align: center;">90.1</td> <td style="text-align: center;">58.3</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-70B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-70b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-70b-Python-hf">HF</a>)</td> <td style="text-align: center;">84.6</td> <td style="text-align: center;"><b>50.7</b></td> <td style="text-align: center;"><b>90.8</b></td> <td style="text-align: center;"><b>60.4</b></td> </tr> </table> The pipeline we used to produce these models is fully open-sourced! 
- [Code](https://github.com/Kipok/NeMo-Skills) - [Models](https://huggingface.co/collections/nvidia/openmath-65c5619de2ba059be0775014) - [Dataset](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1) See our [paper](https://arxiv.org/abs/2402.10176) for more details! # How to use the models? Try to [run inference with our models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/inference.md) with just a few commands! # Reproducing our results We provide [all instructions](https://github.com/Kipok/NeMo-Skills/blob/main/docs/reproducing-results.md) to fully reproduce our results. # Improving other models To improve other models or to learn more about our code, read through the docs below. - [NeMo-Skills Pipeline](https://github.com/Kipok/NeMo-Skills) - [Generating synthetic data](https://github.com/Kipok/NeMo-Skills/blob/main/docs/synthetic-data-generation.md) - [Finetuning models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/finetuning.md) - [Evaluating models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/evaluation.md) In our pipeline we use [NVIDIA NeMo](https://www.nvidia.com/en-us/ai-data-science/generative-ai/nemo-framework/), an end-to-end, cloud-native framework to build, customize, and deploy generative AI models anywhere. It includes training and inferencing frameworks, guardrailing toolkits, data curation tools, and pretrained models, offering enterprises an easy, cost-effective, and fast way to adopt generative AI. # Citation If you find our work useful, please consider citing us! ```bibtex @article{toshniwal2024openmath, title = {OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset}, author = {Shubham Toshniwal and Ivan Moshkov and Sean Narenthiran and Daria Gitman and Fei Jia and Igor Gitman}, year = {2024}, journal = {arXiv preprint arXiv: Arxiv-2402.10176} } ``` # License The use of this model is governed by the [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/)
nvidia/OpenMath-CodeLlama-13b-Python-hf
nvidia
2024-02-16T02:09:28Z
60
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "nvidia", "code", "math", "en", "dataset:nvidia/OpenMathInstruct-1", "arxiv:2402.10176", "base_model:codellama/CodeLlama-13b-Python-hf", "base_model:finetune:codellama/CodeLlama-13b-Python-hf", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-10T00:19:24Z
--- license: llama2 base_model: - codellama/CodeLlama-13b-Python-hf datasets: - nvidia/OpenMathInstruct-1 language: - en tags: - nvidia - code - math --- # OpenMath-CodeLlama-13b-Python-hf OpenMath models were designed to solve mathematical problems by integrating text-based reasoning with code blocks executed by Python interpreter. The models were trained on [OpenMathInstruct-1](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1), a math instruction tuning dataset with 1.8M problem-solution pairs generated using permissively licensed [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) model. <table border="1"> <tr> <td></td> <td colspan="2" style="text-align: center;">greedy</td> <td colspan="2" style="text-align: center;">majority@50</td> </tr> <tr> <td style="text-align: center;">model</td> <td style="text-align: center;">GSM8K</td> <td style="text-align: center;">MATH</td> <td style="text-align: center;">GMS8K</td> <td style="text-align: center;">MATH</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-7B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-7b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-7b-Python-hf">HF</a>)</td> <td style="text-align: center;">75.9</td> <td style="text-align: center;">43.6</td> <td style="text-align: center;">84.8</td> <td style="text-align: center;">55.6</td> </tr> <tr> <td style="text-align: right;">OpenMath-Mistral-7B (<a href="https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1-hf">HF</a>)</td> <td style="text-align: center;">80.2</td> <td style="text-align: center;">44.5</td> <td style="text-align: center;">86.9</td> <td style="text-align: center;">57.2</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-13B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-13b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-13b-Python-hf">HF</a>)</td> <td style="text-align: center;">78.8</td> <td style="text-align: center;">45.5</td> <td style="text-align: center;">86.8</td> <td style="text-align: center;">57.6</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-34B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-34b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-34b-Python-hf">HF</a>)</td> <td style="text-align: center;">80.7</td> <td style="text-align: center;">48.3</td> <td style="text-align: center;">88.0</td> <td style="text-align: center;">60.2</td> </tr> <tr> <td style="text-align: right;">OpenMath-Llama2-70B (<a href="https://huggingface.co/nvidia/OpenMath-Llama-2-70b">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-Llama-2-70b-hf">HF</a>)</td> <td style="text-align: center;"><b>84.7</b></td> <td style="text-align: center;">46.3</td> <td style="text-align: center;">90.1</td> <td style="text-align: center;">58.3</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-70B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-70b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-70b-Python-hf">HF</a>)</td> <td style="text-align: center;">84.6</td> <td style="text-align: center;"><b>50.7</b></td> <td style="text-align: center;"><b>90.8</b></td> <td style="text-align: center;"><b>60.4</b></td> </tr> </table> The pipeline we used to produce these models is fully open-sourced! 
- [Code](https://github.com/Kipok/NeMo-Skills) - [Models](https://huggingface.co/collections/nvidia/openmath-65c5619de2ba059be0775014) - [Dataset](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1) See our [paper](https://arxiv.org/abs/2402.10176) for more details! # How to use the models? Try to [run inference with our models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/inference.md) with just a few commands! # Reproducing our results We provide [all instructions](https://github.com/Kipok/NeMo-Skills/blob/main/docs/reproducing-results.md) to fully reproduce our results. # Improving other models To improve other models or to learn more about our code, read through the docs below. - [NeMo-Skills Pipeline](https://github.com/Kipok/NeMo-Skills) - [Generating synthetic data](https://github.com/Kipok/NeMo-Skills/blob/main/docs/synthetic-data-generation.md) - [Finetuning models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/finetuning.md) - [Evaluating models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/evaluation.md) In our pipeline we use [NVIDIA NeMo](https://www.nvidia.com/en-us/ai-data-science/generative-ai/nemo-framework/), an end-to-end, cloud-native framework to build, customize, and deploy generative AI models anywhere. It includes training and inferencing frameworks, guardrailing toolkits, data curation tools, and pretrained models, offering enterprises an easy, cost-effective, and fast way to adopt generative AI. # Citation If you find our work useful, please consider citing us! ```bibtex @article{toshniwal2024openmath, title = {OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset}, author = {Shubham Toshniwal and Ivan Moshkov and Sean Narenthiran and Daria Gitman and Fei Jia and Igor Gitman}, year = {2024}, journal = {arXiv preprint arXiv: Arxiv-2402.10176} } ``` # License The use of this model is governed by the [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/)
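For a quick local check of the HF checkpoint outside the NeMo-Skills pipeline, a minimal transformers sketch along the following lines may be enough. It assumes the standard transformers generation API, does not reproduce the exact prompt template described in the paper, and does not execute the Python code blocks the model emits (the linked NeMo-Skills inference docs cover that part).

```python
# Minimal sketch: plain text generation from the HF checkpoint.
# The full OpenMath setup (prompt template, code execution, majority voting)
# lives in the NeMo-Skills repository and is not shown here.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "nvidia/OpenMath-CodeLlama-13b-Python-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

question = "What is the minimum value of x^2 + 6x + 5?"
inputs = tokenizer(question, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```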
nvidia/OpenMath-CodeLlama-7b-Python
nvidia
2024-02-16T02:09:04Z
0
2
nemo
[ "nemo", "nvidia", "code", "math", "en", "dataset:nvidia/OpenMathInstruct-1", "arxiv:2402.10176", "base_model:codellama/CodeLlama-7b-Python-hf", "base_model:finetune:codellama/CodeLlama-7b-Python-hf", "license:llama2", "region:us" ]
null
2024-02-09T05:52:53Z
--- license: llama2 base_model: - codellama/CodeLlama-7b-Python-hf datasets: - nvidia/OpenMathInstruct-1 language: - en library_name: nemo tags: - nvidia - code - math --- # OpenMath-CodeLlama-7b-Python OpenMath models were designed to solve mathematical problems by integrating text-based reasoning with code blocks executed by Python interpreter. The models were trained on [OpenMathInstruct-1](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1), a math instruction tuning dataset with 1.8M problem-solution pairs generated using permissively licensed [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) model. <table border="1"> <tr> <td></td> <td colspan="2" style="text-align: center;">greedy</td> <td colspan="2" style="text-align: center;">majority@50</td> </tr> <tr> <td style="text-align: center;">model</td> <td style="text-align: center;">GSM8K</td> <td style="text-align: center;">MATH</td> <td style="text-align: center;">GMS8K</td> <td style="text-align: center;">MATH</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-7B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-7b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-7b-Python-hf">HF</a>)</td> <td style="text-align: center;">75.9</td> <td style="text-align: center;">43.6</td> <td style="text-align: center;">84.8</td> <td style="text-align: center;">55.6</td> </tr> <tr> <td style="text-align: right;">OpenMath-Mistral-7B (<a href="https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1-hf">HF</a>)</td> <td style="text-align: center;">80.2</td> <td style="text-align: center;">44.5</td> <td style="text-align: center;">86.9</td> <td style="text-align: center;">57.2</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-13B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-13b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-13b-Python-hf">HF</a>)</td> <td style="text-align: center;">78.8</td> <td style="text-align: center;">45.5</td> <td style="text-align: center;">86.8</td> <td style="text-align: center;">57.6</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-34B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-34b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-34b-Python-hf">HF</a>)</td> <td style="text-align: center;">80.7</td> <td style="text-align: center;">48.3</td> <td style="text-align: center;">88.0</td> <td style="text-align: center;">60.2</td> </tr> <tr> <td style="text-align: right;">OpenMath-Llama2-70B (<a href="https://huggingface.co/nvidia/OpenMath-Llama-2-70b">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-Llama-2-70b-hf">HF</a>)</td> <td style="text-align: center;"><b>84.7</b></td> <td style="text-align: center;">46.3</td> <td style="text-align: center;">90.1</td> <td style="text-align: center;">58.3</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-70B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-70b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-70b-Python-hf">HF</a>)</td> <td style="text-align: center;">84.6</td> <td style="text-align: center;"><b>50.7</b></td> <td style="text-align: center;"><b>90.8</b></td> <td style="text-align: center;"><b>60.4</b></td> </tr> </table> The pipeline we used to produce these models is fully open-sourced! 
- [Code](https://github.com/Kipok/NeMo-Skills) - [Models](https://huggingface.co/collections/nvidia/openmath-65c5619de2ba059be0775014) - [Dataset](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1) See our [paper](https://arxiv.org/abs/2402.10176) for more details! # How to use the models? Try to [run inference with our models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/inference.md) with just a few commands! # Reproducing our results We provide [all instructions](https://github.com/Kipok/NeMo-Skills/blob/main/docs/reproducing-results.md) to fully reproduce our results. # Improving other models To improve other models or to learn more about our code, read through the docs below. - [NeMo-Skills Pipeline](https://github.com/Kipok/NeMo-Skills) - [Generating synthetic data](https://github.com/Kipok/NeMo-Skills/blob/main/docs/synthetic-data-generation.md) - [Finetuning models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/finetuning.md) - [Evaluating models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/evaluation.md) In our pipeline we use [NVIDIA NeMo](https://www.nvidia.com/en-us/ai-data-science/generative-ai/nemo-framework/), an end-to-end, cloud-native framework to build, customize, and deploy generative AI models anywhere. It includes training and inferencing frameworks, guardrailing toolkits, data curation tools, and pretrained models, offering enterprises an easy, cost-effective, and fast way to adopt generative AI. # Citation If you find our work useful, please consider citing us! ```bibtex @article{toshniwal2024openmath, title = {OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset}, author = {Shubham Toshniwal and Ivan Moshkov and Sean Narenthiran and Daria Gitman and Fei Jia and Igor Gitman}, year = {2024}, journal = {arXiv preprint arXiv: Arxiv-2402.10176} } ``` # License The use of this model is governed by the [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/)
nvidia/OpenMath-Mistral-7B-v0.1-hf
nvidia
2024-02-16T02:08:55Z
291
30
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "nvidia", "code", "math", "en", "dataset:nvidia/OpenMathInstruct-1", "arxiv:2402.10176", "base_model:mistralai/Mistral-7B-v0.1", "base_model:finetune:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-06T19:11:12Z
--- license: apache-2.0 base_model: - mistralai/Mistral-7B-v0.1 datasets: - nvidia/OpenMathInstruct-1 language: - en tags: - nvidia - code - math --- # OpenMath-Mistral-7B-v0.1-hf OpenMath models were designed to solve mathematical problems by integrating text-based reasoning with code blocks executed by Python interpreter. The models were trained on [OpenMathInstruct-1](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1), a math instruction tuning dataset with 1.8M problem-solution pairs generated using permissively licensed [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) model. <table border="1"> <tr> <td></td> <td colspan="2" style="text-align: center;">greedy</td> <td colspan="2" style="text-align: center;">majority@50</td> </tr> <tr> <td style="text-align: center;">model</td> <td style="text-align: center;">GSM8K</td> <td style="text-align: center;">MATH</td> <td style="text-align: center;">GMS8K</td> <td style="text-align: center;">MATH</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-7B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-7b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-7b-Python-hf">HF</a>)</td> <td style="text-align: center;">75.9</td> <td style="text-align: center;">43.6</td> <td style="text-align: center;">84.8</td> <td style="text-align: center;">55.6</td> </tr> <tr> <td style="text-align: right;">OpenMath-Mistral-7B (<a href="https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1-hf">HF</a>)</td> <td style="text-align: center;">80.2</td> <td style="text-align: center;">44.5</td> <td style="text-align: center;">86.9</td> <td style="text-align: center;">57.2</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-13B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-13b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-13b-Python-hf">HF</a>)</td> <td style="text-align: center;">78.8</td> <td style="text-align: center;">45.5</td> <td style="text-align: center;">86.8</td> <td style="text-align: center;">57.6</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-34B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-34b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-34b-Python-hf">HF</a>)</td> <td style="text-align: center;">80.7</td> <td style="text-align: center;">48.3</td> <td style="text-align: center;">88.0</td> <td style="text-align: center;">60.2</td> </tr> <tr> <td style="text-align: right;">OpenMath-Llama2-70B (<a href="https://huggingface.co/nvidia/OpenMath-Llama-2-70b">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-Llama-2-70b-hf">HF</a>)</td> <td style="text-align: center;"><b>84.7</b></td> <td style="text-align: center;">46.3</td> <td style="text-align: center;">90.1</td> <td style="text-align: center;">58.3</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-70B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-70b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-70b-Python-hf">HF</a>)</td> <td style="text-align: center;">84.6</td> <td style="text-align: center;"><b>50.7</b></td> <td style="text-align: center;"><b>90.8</b></td> <td style="text-align: center;"><b>60.4</b></td> </tr> </table> The pipeline we used to produce these models is fully open-sourced! 
- [Code](https://github.com/Kipok/NeMo-Skills) - [Models](https://huggingface.co/collections/nvidia/openmath-65c5619de2ba059be0775014) - [Dataset](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1) See our [paper](https://arxiv.org/abs/2402.10176) for more details! # How to use the models? Try to [run inference with our models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/inference.md) with just a few commands! # Reproducing our results We provide [all instructions](https://github.com/Kipok/NeMo-Skills/blob/main/docs/reproducing-results.md) to fully reproduce our results. # Improving other models To improve other models or to learn more about our code, read through the docs below. - [NeMo-Skills Pipeline](https://github.com/Kipok/NeMo-Skills) - [Generating synthetic data](https://github.com/Kipok/NeMo-Skills/blob/main/docs/synthetic-data-generation.md) - [Finetuning models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/finetuning.md) - [Evaluating models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/evaluation.md) In our pipeline we use [NVIDIA NeMo](https://www.nvidia.com/en-us/ai-data-science/generative-ai/nemo-framework/), an end-to-end, cloud-native framework to build, customize, and deploy generative AI models anywhere. It includes training and inferencing frameworks, guardrailing toolkits, data curation tools, and pretrained models, offering enterprises an easy, cost-effective, and fast way to adopt generative AI. # Citation If you find our work useful, please consider citing us! ```bibtex @article{toshniwal2024openmath, title = {OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset}, author = {Shubham Toshniwal and Ivan Moshkov and Sean Narenthiran and Daria Gitman and Fei Jia and Igor Gitman}, year = {2024}, journal = {arXiv preprint arXiv: Arxiv-2402.10176} } ```
nvidia/OpenMath-CodeLlama-70b-Python
nvidia
2024-02-16T02:07:39Z
0
5
nemo
[ "nemo", "nvidia", "code", "math", "en", "dataset:nvidia/OpenMathInstruct-1", "arxiv:2402.10176", "base_model:codellama/CodeLlama-70b-Python-hf", "base_model:finetune:codellama/CodeLlama-70b-Python-hf", "license:llama2", "region:us" ]
null
2024-02-10T23:14:43Z
--- license: llama2 base_model: - codellama/CodeLlama-70b-Python-hf datasets: - nvidia/OpenMathInstruct-1 language: - en library_name: nemo tags: - nvidia - code - math --- # OpenMath-CodeLlama-70b-Python OpenMath models were designed to solve mathematical problems by integrating text-based reasoning with code blocks executed by Python interpreter. The models were trained on [OpenMathInstruct-1](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1), a math instruction tuning dataset with 1.8M problem-solution pairs generated using permissively licensed [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) model. <table border="1"> <tr> <td></td> <td colspan="2" style="text-align: center;">greedy</td> <td colspan="2" style="text-align: center;">majority@50</td> </tr> <tr> <td style="text-align: center;">model</td> <td style="text-align: center;">GSM8K</td> <td style="text-align: center;">MATH</td> <td style="text-align: center;">GMS8K</td> <td style="text-align: center;">MATH</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-7B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-7b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-7b-Python-hf">HF</a>)</td> <td style="text-align: center;">75.9</td> <td style="text-align: center;">43.6</td> <td style="text-align: center;">84.8</td> <td style="text-align: center;">55.6</td> </tr> <tr> <td style="text-align: right;">OpenMath-Mistral-7B (<a href="https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1-hf">HF</a>)</td> <td style="text-align: center;">80.2</td> <td style="text-align: center;">44.5</td> <td style="text-align: center;">86.9</td> <td style="text-align: center;">57.2</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-13B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-13b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-13b-Python-hf">HF</a>)</td> <td style="text-align: center;">78.8</td> <td style="text-align: center;">45.5</td> <td style="text-align: center;">86.8</td> <td style="text-align: center;">57.6</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-34B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-34b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-34b-Python-hf">HF</a>)</td> <td style="text-align: center;">80.7</td> <td style="text-align: center;">48.3</td> <td style="text-align: center;">88.0</td> <td style="text-align: center;">60.2</td> </tr> <tr> <td style="text-align: right;">OpenMath-Llama2-70B (<a href="https://huggingface.co/nvidia/OpenMath-Llama-2-70b">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-Llama-2-70b-hf">HF</a>)</td> <td style="text-align: center;"><b>84.7</b></td> <td style="text-align: center;">46.3</td> <td style="text-align: center;">90.1</td> <td style="text-align: center;">58.3</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-70B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-70b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-70b-Python-hf">HF</a>)</td> <td style="text-align: center;">84.6</td> <td style="text-align: center;"><b>50.7</b></td> <td style="text-align: center;"><b>90.8</b></td> <td style="text-align: center;"><b>60.4</b></td> </tr> </table> The pipeline we used to produce these models is fully open-sourced! 
- [Code](https://github.com/Kipok/NeMo-Skills) - [Models](https://huggingface.co/collections/nvidia/openmath-65c5619de2ba059be0775014) - [Dataset](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1) See our [paper](https://arxiv.org/abs/2402.10176) for more details! # How to use the models? Try to [run inference with our models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/inference.md) with just a few commands! # Reproducing our results We provide [all instructions](https://github.com/Kipok/NeMo-Skills/blob/main/docs/reproducing-results.md) to fully reproduce our results. # Improving other models To improve other models or to learn more about our code, read through the docs below. - [NeMo-Skills Pipeline](https://github.com/Kipok/NeMo-Skills) - [Generating synthetic data](https://github.com/Kipok/NeMo-Skills/blob/main/docs/synthetic-data-generation.md) - [Finetuning models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/finetuning.md) - [Evaluating models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/evaluation.md) In our pipeline we use [NVIDIA NeMo](https://www.nvidia.com/en-us/ai-data-science/generative-ai/nemo-framework/), an end-to-end, cloud-native framework to build, customize, and deploy generative AI models anywhere. It includes training and inferencing frameworks, guardrailing toolkits, data curation tools, and pretrained models, offering enterprises an easy, cost-effective, and fast way to adopt generative AI. # Citation If you find our work useful, please consider citing us! ```bibtex @article{toshniwal2024openmath, title = {OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset}, author = {Shubham Toshniwal and Ivan Moshkov and Sean Narenthiran and Daria Gitman and Fei Jia and Igor Gitman}, year = {2024}, journal = {arXiv preprint arXiv: Arxiv-2402.10176} } ``` # License The use of this model is governed by the [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/)
furrutiav/math_bert_qa_extractor_cockatiel_2022_nllf_mixtral_v2_it_1492
furrutiav
2024-02-16T02:07:27Z
90
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
2024-02-16T02:05:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nvidia/OpenMath-Llama-2-70b-hf
nvidia
2024-02-16T02:07:12Z
32
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "nvidia", "code", "math", "en", "dataset:nvidia/OpenMathInstruct-1", "arxiv:2402.10176", "base_model:meta-llama/Llama-2-70b-hf", "base_model:finetune:meta-llama/Llama-2-70b-hf", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-10T23:14:20Z
--- license: llama2 base_model: - meta-llama/Llama-2-70b-hf datasets: - nvidia/OpenMathInstruct-1 language: - en tags: - nvidia - code - math --- # OpenMath-Llama-2-70b-hf OpenMath models were designed to solve mathematical problems by integrating text-based reasoning with code blocks executed by Python interpreter. The models were trained on [OpenMathInstruct-1](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1), a math instruction tuning dataset with 1.8M problem-solution pairs generated using permissively licensed [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) model. <table border="1"> <tr> <td></td> <td colspan="2" style="text-align: center;">greedy</td> <td colspan="2" style="text-align: center;">majority@50</td> </tr> <tr> <td style="text-align: center;">model</td> <td style="text-align: center;">GSM8K</td> <td style="text-align: center;">MATH</td> <td style="text-align: center;">GMS8K</td> <td style="text-align: center;">MATH</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-7B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-7b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-7b-Python-hf">HF</a>)</td> <td style="text-align: center;">75.9</td> <td style="text-align: center;">43.6</td> <td style="text-align: center;">84.8</td> <td style="text-align: center;">55.6</td> </tr> <tr> <td style="text-align: right;">OpenMath-Mistral-7B (<a href="https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1-hf">HF</a>)</td> <td style="text-align: center;">80.2</td> <td style="text-align: center;">44.5</td> <td style="text-align: center;">86.9</td> <td style="text-align: center;">57.2</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-13B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-13b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-13b-Python-hf">HF</a>)</td> <td style="text-align: center;">78.8</td> <td style="text-align: center;">45.5</td> <td style="text-align: center;">86.8</td> <td style="text-align: center;">57.6</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-34B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-34b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-34b-Python-hf">HF</a>)</td> <td style="text-align: center;">80.7</td> <td style="text-align: center;">48.3</td> <td style="text-align: center;">88.0</td> <td style="text-align: center;">60.2</td> </tr> <tr> <td style="text-align: right;">OpenMath-Llama2-70B (<a href="https://huggingface.co/nvidia/OpenMath-Llama-2-70b">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-Llama-2-70b-hf">HF</a>)</td> <td style="text-align: center;"><b>84.7</b></td> <td style="text-align: center;">46.3</td> <td style="text-align: center;">90.1</td> <td style="text-align: center;">58.3</td> </tr> <tr> <td style="text-align: right;">OpenMath-CodeLlama-70B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-70b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-70b-Python-hf">HF</a>)</td> <td style="text-align: center;">84.6</td> <td style="text-align: center;"><b>50.7</b></td> <td style="text-align: center;"><b>90.8</b></td> <td style="text-align: center;"><b>60.4</b></td> </tr> </table> The pipeline we used to produce these models is fully open-sourced! 
- [Code](https://github.com/Kipok/NeMo-Skills) - [Models](https://huggingface.co/collections/nvidia/openmath-65c5619de2ba059be0775014) - [Dataset](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1) See our [paper](https://arxiv.org/abs/2402.10176) for more details! # How to use the models? Try to [run inference with our models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/inference.md) with just a few commands! # Reproducing our results We provide [all instructions](https://github.com/Kipok/NeMo-Skills/blob/main/docs/reproducing-results.md) to fully reproduce our results. # Improving other models To improve other models or to learn more about our code, read through the docs below. - [NeMo-Skills Pipeline](https://github.com/Kipok/NeMo-Skills) - [Generating synthetic data](https://github.com/Kipok/NeMo-Skills/blob/main/docs/synthetic-data-generation.md) - [Finetuning models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/finetuning.md) - [Evaluating models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/evaluation.md) In our pipeline we use [NVIDIA NeMo](https://www.nvidia.com/en-us/ai-data-science/generative-ai/nemo-framework/), an end-to-end, cloud-native framework to build, customize, and deploy generative AI models anywhere. It includes training and inferencing frameworks, guardrailing toolkits, data curation tools, and pretrained models, offering enterprises an easy, cost-effective, and fast way to adopt generative AI. # Citation If you find our work useful, please consider citing us! ```bibtex @article{toshniwal2024openmath, title = {OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset}, author = {Shubham Toshniwal and Ivan Moshkov and Sean Narenthiran and Daria Gitman and Fei Jia and Igor Gitman}, year = {2024}, journal = {arXiv preprint arXiv: Arxiv-2402.10176} } ``` # License The use of this model is governed by the [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/)
Shijia/furina_seed42_eng_esp_hau_cross_5e-06
Shijia
2024-02-16T02:04:59Z
100
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:yihongLiu/furina", "base_model:finetune:yihongLiu/furina", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T02:03:19Z
--- base_model: yihongLiu/furina tags: - generated_from_trainer model-index: - name: furina_seed42_eng_esp_hau_cross_5e-06 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # furina_seed42_eng_esp_hau_cross_5e-06 This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0260 - Spearman Corr: 0.7338 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Spearman Corr | |:-------------:|:-----:|:----:|:---------------:|:-------------:| | No log | 0.48 | 200 | 0.0504 | 0.1104 | | No log | 0.97 | 400 | 0.0316 | 0.6024 | | No log | 1.45 | 600 | 0.0338 | 0.6583 | | No log | 1.94 | 800 | 0.0294 | 0.6741 | | 0.0692 | 2.42 | 1000 | 0.0294 | 0.6849 | | 0.0692 | 2.91 | 1200 | 0.0312 | 0.6991 | | 0.0692 | 3.39 | 1400 | 0.0312 | 0.7002 | | 0.0692 | 3.88 | 1600 | 0.0231 | 0.7199 | | 0.0291 | 4.36 | 1800 | 0.0243 | 0.7215 | | 0.0291 | 4.85 | 2000 | 0.0286 | 0.7169 | | 0.0291 | 5.33 | 2200 | 0.0274 | 0.7279 | | 0.0291 | 5.82 | 2400 | 0.0248 | 0.7313 | | 0.0248 | 6.3 | 2600 | 0.0266 | 0.7305 | | 0.0248 | 6.79 | 2800 | 0.0238 | 0.7325 | | 0.0248 | 7.27 | 3000 | 0.0262 | 0.7311 | | 0.0248 | 7.76 | 3200 | 0.0260 | 0.7338 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
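For readers who want to reuse this recipe, the hyperparameters listed above map roughly onto transformers `TrainingArguments` as sketched below; this is an illustrative reconstruction under that assumption, not the exact training script behind this checkpoint.

```python
# Rough mapping of the listed hyperparameters onto TrainingArguments.
# Illustrative only -- the dataset and model head used for training are
# not documented in this card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="furina_seed42_eng_esp_hau_cross_5e-06",
    learning_rate=5e-6,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=128,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 64
    lr_scheduler_type="linear",
    num_train_epochs=30,
    fp16=True,                       # "Native AMP" mixed precision
)
```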
DrNicefellow/WorryFree_GeneralQA_Chat_Mixtral-8x7B-v1
DrNicefellow
2024-02-16T02:03:03Z
102
1
transformers
[ "transformers", "pytorch", "mixtral", "text-generation", "dataset:DrNicefellow/Quality_WorryFree_GeneralQA_Chat_Dataset-v1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T01:21:26Z
--- license: apache-2.0 datasets: - DrNicefellow/Quality_WorryFree_GeneralQA_Chat_Dataset-v1 --- # WorryFree_GeneralQA_Chat_Mixtral-8x7B-v1 ## Description WorryFree_GeneralQA_Chat_Mixtral-8x7B-v1 is a chat language model fine-tuned on the Quality_WorryFree_GeneralQA_Chat_Dataset-v1 dataset using the QLoRA technique. Originally based on the [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) model, this version is specifically optimized for diverse and comprehensive chat applications. ## Model Details - **Base Model**: [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) - **Fine-tuning Technique**: QLoRA (Quantized Low-Rank Adaptation) - **Dataset**: [DrNicefellow/Quality_WorryFree_GeneralQA_Chat_Dataset-v1](https://huggingface.co/datasets/DrNicefellow/Quality_WorryFree_GeneralQA_Chat_Dataset-v1) - **Tool Used for Fine-tuning**: [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) ## Features - Enhanced understanding and generation of conversational language. - Improved performance in diverse chat scenarios, including casual, formal, and domain-specific conversations. - Fine-tuned to maintain context and coherence over longer dialogues. ## Prompt Format Vicuna 1.1. See the fine-tuning dataset for examples. ## License This model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details. ## Feeling Generous? 😊 Eager to buy me a $2 cup of coffee or iced tea? 🍵☕ Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note on which one you want me to drink?
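Since the card only names the prompt format, a hedged illustration of Vicuna-1.1-style prompting may help; the system line below is a common Vicuna default and an assumption here, so check the linked fine-tuning dataset for the exact wording used during training.

```python
# Sketch of Vicuna-1.1-style prompting ("USER: ... ASSISTANT:").
# The system sentence is illustrative, not taken from the training data.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "DrNicefellow/WorryFree_GeneralQA_Chat_Mixtral-8x7B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system = "A chat between a curious user and an artificial intelligence assistant."
prompt = f"{system} USER: What causes ocean tides? ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```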
yesj1234/jako_xlsr_100p_sup2
yesj1234
2024-02-16T02:00:32Z
63
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "./train_dataset_sup.py", "generated_from_trainer", "base_model:facebook/wav2vec2-large-xlsr-53", "base_model:finetune:facebook/wav2vec2-large-xlsr-53", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-16T01:58:30Z
--- license: apache-2.0 base_model: facebook/wav2vec2-large-xlsr-53 tags: - automatic-speech-recognition - ./train_dataset_sup.py - generated_from_trainer model-index: - name: finetuned_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_model This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the ./TRAIN_DATASET_SUP.PY - NA dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 30 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
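A minimal transcription sketch is given below, assuming this checkpoint exposes a standard CTC head and processor compatible with the transformers ASR pipeline; the audio path is a placeholder for a local (ideally 16 kHz mono) recording.

```python
# Minimal ASR sketch via the transformers pipeline; "sample.wav" is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="yesj1234/jako_xlsr_100p_sup2",
)
print(asr("sample.wav")["text"])
```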
JiajingChen/9
JiajingChen
2024-02-16T01:47:51Z
1
0
transformers
[ "transformers", "tensorboard", "onnx", "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "endpoints_compatible", "region:us" ]
reinforcement-learning
2024-02-11T10:58:00Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: '9' results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 27.20 +/- 22.71 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Shijia/furina_seed42_eng_esp_hau_cross_0.0001
Shijia
2024-02-16T01:37:29Z
90
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:yihongLiu/furina", "base_model:finetune:yihongLiu/furina", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T01:36:00Z
--- base_model: yihongLiu/furina tags: - generated_from_trainer model-index: - name: furina_seed42_eng_esp_hau_cross_0.0001 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # furina_seed42_eng_esp_hau_cross_0.0001 This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0279 - Spearman Corr: 0.6968 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Spearman Corr | |:-------------:|:-----:|:----:|:---------------:|:-------------:| | No log | 0.48 | 200 | 0.0361 | 0.5376 | | No log | 0.97 | 400 | 0.0267 | 0.6376 | | No log | 1.45 | 600 | 0.0314 | 0.6433 | | No log | 1.94 | 800 | 0.0275 | 0.6611 | | 0.0438 | 2.42 | 1000 | 0.0256 | 0.6919 | | 0.0438 | 2.91 | 1200 | 0.0347 | 0.6921 | | 0.0438 | 3.39 | 1400 | 0.0309 | 0.6727 | | 0.0438 | 3.88 | 1600 | 0.0366 | 0.6935 | | 0.0231 | 4.36 | 1800 | 0.0281 | 0.6924 | | 0.0231 | 4.85 | 2000 | 0.0285 | 0.6941 | | 0.0231 | 5.33 | 2200 | 0.0268 | 0.6985 | | 0.0231 | 5.82 | 2400 | 0.0306 | 0.6896 | | 0.0148 | 6.3 | 2600 | 0.0279 | 0.6968 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
tsavage68/chat_1000STEPS_1e7_05beta_DPO
tsavage68
2024-02-16T01:36:11Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:finetune:meta-llama/Llama-2-7b-chat-hf", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T01:32:17Z
--- base_model: meta-llama/Llama-2-7b-chat-hf tags: - trl - dpo - generated_from_trainer model-index: - name: chat_1000STEPS_1e7_05beta_DPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chat_1000STEPS_1e7_05beta_DPO This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6864 - Rewards/chosen: 0.0033 - Rewards/rejected: -0.0130 - Rewards/accuracies: 0.4571 - Rewards/margins: 0.0163 - Logps/rejected: -18.8173 - Logps/chosen: -16.7381 - Logits/rejected: -0.5974 - Logits/chosen: -0.5973 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-07 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6957 | 0.2 | 100 | 0.6926 | -0.0030 | -0.0058 | 0.4132 | 0.0028 | -18.8028 | -16.7506 | -0.5972 | -0.5971 | | 0.6931 | 0.39 | 200 | 0.6899 | 0.0035 | -0.0050 | 0.4835 | 0.0085 | -18.8013 | -16.7376 | -0.5981 | -0.5980 | | 0.6783 | 0.59 | 300 | 0.6915 | -0.0059 | -0.0111 | 0.4593 | 0.0052 | -18.8135 | -16.7564 | -0.5978 | -0.5977 | | 0.6952 | 0.78 | 400 | 0.6904 | 0.0004 | -0.0075 | 0.4615 | 0.0079 | -18.8063 | -16.7439 | -0.5975 | -0.5973 | | 0.6927 | 0.98 | 500 | 0.6904 | -0.0036 | -0.0115 | 0.4396 | 0.0080 | -18.8144 | -16.7518 | -0.5981 | -0.5980 | | 0.6701 | 1.17 | 600 | 0.6878 | -0.0038 | -0.0170 | 0.4681 | 0.0132 | -18.8254 | -16.7522 | -0.5978 | -0.5977 | | 0.6796 | 1.37 | 700 | 0.6886 | -0.0031 | -0.0150 | 0.4725 | 0.0119 | -18.8213 | -16.7508 | -0.5970 | -0.5969 | | 0.6686 | 1.56 | 800 | 0.6881 | -0.0031 | -0.0158 | 0.4813 | 0.0127 | -18.8228 | -16.7508 | -0.5973 | -0.5972 | | 0.6767 | 1.76 | 900 | 0.6901 | -0.0033 | -0.0123 | 0.4440 | 0.0091 | -18.8159 | -16.7511 | -0.5972 | -0.5971 | | 0.6702 | 1.95 | 1000 | 0.6864 | 0.0033 | -0.0130 | 0.4571 | 0.0163 | -18.8173 | -16.7381 | -0.5974 | -0.5973 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.0.0+cu117 - Datasets 2.17.0 - Tokenizers 0.15.2
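To try the resulting checkpoint in chat, a sketch along these lines should work, assuming the tokenizer keeps the Llama-2 chat template of the base model; that assumption is not confirmed by the card.

```python
# Generation sketch; assumes the Llama-2 chat template is inherited from
# meta-llama/Llama-2-7b-chat-hf.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "tsavage68/chat_1000STEPS_1e7_05beta_DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Give me three tips for better sleep."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```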
onlinex/stablelm-2-zephyr-1_6b-gptq-4bit
onlinex
2024-02-16T01:34:56Z
89
0
transformers
[ "transformers", "safetensors", "stablelm_epoch", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-02-15T22:38:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LoudAI/kubwa-2.7B-ian
LoudAI
2024-02-16T01:32:26Z
36
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "merge", "mergekit", "lazymergekit", "dalyaff/phi2-sql", "nakcnx/phi-2-sql-v1", "custom_code", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T01:31:41Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - dalyaff/phi2-sql - nakcnx/phi-2-sql-v1 --- # Phi-2-sql-merge-slerp Phi-2-sql-merge-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [dalyaff/phi2-sql](https://huggingface.co/dalyaff/phi2-sql) * [nakcnx/phi-2-sql-v1](https://huggingface.co/nakcnx/phi-2-sql-v1) ## 🧩 Configuration The merge configuration is defined in `./config/sql_gradient-slerp.yml`.
wjworld/chaoyang_adenocarcinoma_colon_slide
wjworld
2024-02-16T01:28:56Z
29
1
diffusers
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "stable-diffusion", "stable-diffusion-diffusers", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-16T01:24:25Z
--- license: creativeml-openrail-m library_name: diffusers tags: - text-to-image - dreambooth - stable-diffusion - stable-diffusion-diffusers inference: true base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of adenocarcinoma colon slide --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - wjworld/chaoyang_adenocarcinoma_colon_slide This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of adenocarcinoma colon slide using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
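Pending the card's TODO, one plausible way to run this checkpoint is the standard diffusers text-to-image call with the instance prompt listed above; treat it as a sketch rather than the authors' intended usage.

```python
# DreamBooth inference sketch using the instance prompt the card lists.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "wjworld/chaoyang_adenocarcinoma_colon_slide", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of adenocarcinoma colon slide", num_inference_steps=50).images[0]
image.save("adenocarcinoma_colon_slide.png")
```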
antisoc-qa-assoc/pure-crest-instruct-0.1
antisoc-qa-assoc
2024-02-16T01:26:07Z
2
0
transformers
[ "transformers", "pytorch", "mixtral", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-15T17:44:00Z
--- base_model: [] tags: - mergekit - merge --- # pure-crest-instruct This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using E:\text-generation-webui\models\Mixtral-8x7B-v0.1 as a base. ### Models Merged The following models were included in the merge: * E:\text-generation-webui\models\pure-crest-0.1\merged * E:\text-generation-webui\models\Mixtral-8x7B-Instruct-v0.1 ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: E:\text-generation-webui\models\Mixtral-8x7B-Instruct-v0.1 parameters: density: 0.5 weight: 1 - model: E:\text-generation-webui\models\pure-crest-0.1\merged parameters: density: 0.5 weight: 0.5 merge_method: dare_ties base_model: E:\text-generation-webui\models\Mixtral-8x7B-v0.1 parameters: #normalize: false #int8_mask: true dtype: bfloat16 ```
macadeliccc/DrKlaus-7B
macadeliccc
2024-02-16T01:21:06Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-15T23:38:24Z
--- license: apache-2.0 --- # DrKlaus-7B ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/6455cc8d679315e4ef16fbec/E0UeNsU-zKRAwySfeCWf8.webp) DrKlaus-7B is an SFT model made with [AutoSloth](https://colab.research.google.com/drive/1Zo0sVEb2lqdsUm9dy2PTzGySxdF9CNkc#scrollTo=MmLkhAjzYyJ4) by [macadeliccc](https://huggingface.co/macadeliccc) ## Process - Original Model: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) - Dataset: [medalpaca/medical_meadow_wikidoc_patient_information](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc_patient_information) - Learning Rate: 3e-05 - Steps: 80 - Warmup Steps: 8 - Per Device Train Batch Size: 24 - Gradient Accumulation Steps: 12 - Optimizer: adamw_8bit - Max Sequence Length: 1024 - Max Prompt Length: 512 - Max Length: 1024 ## 💻 Usage ```python !pip install -qU transformers from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline model_id = "macadeliccc/DrKlaus-7B" tokenizer = AutoTokenizer.from_pretrained(model_id) # Example prompt prompt = "Your example prompt here" # Generate a response model = AutoModelForCausalLM.from_pretrained(model_id) pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) outputs = pipe(prompt, max_length=50, num_return_sequences=1) print(outputs[0]["generated_text"]) ``` <div align="center"> <img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/made%20with%20unsloth.png" height="50" align="center" /> </div>
Kquant03/Triunvirato-7b-laser
Kquant03
2024-02-16T01:01:08Z
6
2
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "mistralai/Mistral-7B-v0.1", "Kukedlc/neuronal-7b-Mlab", "mlabonne/Monarch-7B", "base_model:Kukedlc/neuronal-7b-Mlab", "base_model:merge:Kukedlc/neuronal-7b-Mlab", "base_model:mistralai/Mistral-7B-v0.1", "base_model:merge:mistralai/Mistral-7B-v0.1", "base_model:mlabonne/Monarch-7B", "base_model:merge:mlabonne/Monarch-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T00:15:43Z
--- tags: - merge - mergekit - lazymergekit - mistralai/Mistral-7B-v0.1 - Kukedlc/neuronal-7b-Mlab - mlabonne/Monarch-7B base_model: - mistralai/Mistral-7B-v0.1 - Kukedlc/neuronal-7b-Mlab - mlabonne/Monarch-7B --- # Triunvirato-7b Triunvirato-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) * [Kukedlc/neuronal-7b-Mlab](https://huggingface.co/Kukedlc/neuronal-7b-Mlab) * [mlabonne/Monarch-7B](https://huggingface.co/mlabonne/Monarch-7B) # Credit goes to [kukedlc](https://huggingface.co/Kukedlc/Triunvirato-7b) ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 parameters: density: [1, 0.7, 0.1] # density gradient weight: 1.0 - model: Kukedlc/neuronal-7b-Mlab parameters: density: 0.5 weight: [0, 0.3, 0.7, 1] # weight gradient - model: mlabonne/Monarch-7B parameters: density: 0.33 weight: - filter: mlp value: 0.5 - value: 0 merge_method: ties base_model: mistralai/Mistral-7B-v0.1 parameters: normalize: true int8_mask: true dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Kukedlc/Triunvirato-7b" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Shijia/furina_seed42_eng_amh_esp_basic_2e-05
Shijia
2024-02-16T00:28:22Z
101
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:yihongLiu/furina", "base_model:finetune:yihongLiu/furina", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T00:26:58Z
--- base_model: yihongLiu/furina tags: - generated_from_trainer model-index: - name: furina_seed42_eng_amh_esp_basic_2e-05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # furina_seed42_eng_amh_esp_basic_2e-05 This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0184 - Spearman Corr: 0.7633 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Spearman Corr | |:-------------:|:-----:|:----:|:---------------:|:-------------:| | No log | 1.59 | 200 | 0.0194 | 0.6961 | | 0.0713 | 3.17 | 400 | 0.0205 | 0.7332 | | 0.0215 | 4.76 | 600 | 0.0163 | 0.7600 | | 0.0164 | 6.35 | 800 | 0.0180 | 0.7671 | | 0.0164 | 7.94 | 1000 | 0.0175 | 0.7687 | | 0.0134 | 9.52 | 1200 | 0.0184 | 0.7775 | | 0.0111 | 11.11 | 1400 | 0.0161 | 0.7727 | | 0.0093 | 12.7 | 1600 | 0.0169 | 0.7679 | | 0.0078 | 14.29 | 1800 | 0.0182 | 0.7689 | | 0.0078 | 15.87 | 2000 | 0.0187 | 0.7668 | | 0.0071 | 17.46 | 2200 | 0.0188 | 0.7705 | | 0.006 | 19.05 | 2400 | 0.0181 | 0.7702 | | 0.0056 | 20.63 | 2600 | 0.0176 | 0.7625 | | 0.0051 | 22.22 | 2800 | 0.0186 | 0.7680 | | 0.0051 | 23.81 | 3000 | 0.0184 | 0.7633 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
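The card above reports a Spearman correlation on what appears to be a sentence-pair regression task but gives no usage snippet. A hedged sketch of scoring a pair with the fine-tuned classification head follows; the pair input format and single-logit regression head are assumptions inferred from the reported metric, not stated in the card:

```python
# Hedged sketch: score a sentence pair with the fine-tuned head.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Shijia/furina_seed42_eng_amh_esp_basic_2e-05"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("first sentence", "second sentence", return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```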
ahmed13377/bart-samsum-finetuning
ahmed13377
2024-02-16T00:27:03Z
91
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-16T00:26:51Z
--- license: apache-2.0 base_model: google-t5/t5-small tags: - generated_from_trainer model-index: - name: bart-samsum-finetuning results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-samsum-finetuning This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3737 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.3577 | 1.0 | 19 | 0.4668 | | 0.2972 | 2.0 | 38 | 0.4162 | | 0.2621 | 3.0 | 57 | 0.3923 | | 0.2692 | 4.0 | 76 | 0.3791 | | 0.2694 | 5.0 | 95 | 0.3737 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
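The card above lists hyperparameters but no inference example. The repository name suggests a SAMSum-style dialogue-summarization fine-tune, so a hedged sketch using the summarization pipeline might look like this; the summarization framing is an assumption, since the card only says "an unknown dataset":

```python
# Hedged sketch, assuming the checkpoint was fine-tuned for dialogue summarization.
from transformers import pipeline

summarizer = pipeline("summarization", model="ahmed13377/bart-samsum-finetuning")
dialogue = "Anna: Are we still on for lunch? Ben: Yes, 12:30 at the usual place."
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```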
arun100/whisper-base-vi-2
arun100
2024-02-16T00:21:40Z
60
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "dataset:google/fleurs", "base_model:arun100/whisper-base-vi-1", "base_model:finetune:arun100/whisper-base-vi-1", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-15T18:22:24Z
--- license: apache-2.0 base_model: arun100/whisper-base-vi-1 tags: - whisper-event - generated_from_trainer datasets: - google/fleurs metrics: - wer model-index: - name: Whisper Base Vietnamese results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: google/fleurs vi_vn type: google/fleurs config: vi_vn split: test args: vi_vn metrics: - name: Wer type: wer value: 31.03382013835511 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Base Vietnamese This model is a fine-tuned version of [arun100/whisper-base-vi-1](https://huggingface.co/arun100/whisper-base-vi-1) on the google/fleurs vi_vn dataset. It achieves the following results on the evaluation set: - Loss: 0.6949 - Wer: 31.0338 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.5823 | 43.0 | 500 | 0.7964 | 37.8978 | | 0.3312 | 86.0 | 1000 | 0.6997 | 33.7125 | | 0.2009 | 130.0 | 1500 | 0.6784 | 32.7479 | | 0.1271 | 173.0 | 2000 | 0.6760 | 31.9985 | | 0.0815 | 217.0 | 2500 | 0.6799 | 31.3028 | | 0.0561 | 260.0 | 3000 | 0.6851 | 31.2337 | | 0.0438 | 304.0 | 3500 | 0.6896 | 31.7256 | | 0.0367 | 347.0 | 4000 | 0.6928 | 31.5949 | | 0.0331 | 391.0 | 4500 | 0.6949 | 31.0338 | | 0.0317 | 434.0 | 5000 | 0.6957 | 31.0453 | ### Framework versions - Transformers 4.37.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.16.2.dev0 - Tokenizers 0.15.0
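The Whisper card above reports WER on google/fleurs vi_vn but has no usage section. A hedged sketch with the standard transformers ASR pipeline follows; the audio path is a placeholder, not a file from the repo:

```python
# Hedged sketch: transcribe Vietnamese speech with the fine-tuned Whisper base model.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="arun100/whisper-base-vi-2")
result = asr("path/to/vietnamese_audio.wav")  # placeholder path
print(result["text"])
```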
micfort/output
micfort
2024-02-16T00:17:10Z
0
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "stable-diffusion", "stable-diffusion-diffusers", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-15T21:42:48Z
--- license: creativeml-openrail-m library_name: diffusers tags: - text-to-image - dreambooth - stable-diffusion - stable-diffusion-diffusers inference: true base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of sks dog --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - micfort/output This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
l52227215/123
l52227215
2024-02-16T00:17:09Z
0
0
null
[ "license:other", "region:us" ]
null
2024-02-16T00:17:09Z
--- license: other license_name: '123' license_link: LICENSE ---
Shijia/furina_seed42_eng_amh_esp_basic_0.0001
Shijia
2024-02-16T00:15:53Z
90
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:yihongLiu/furina", "base_model:finetune:yihongLiu/furina", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T00:14:31Z
--- base_model: yihongLiu/furina tags: - generated_from_trainer model-index: - name: furina_seed42_eng_amh_esp_basic_0.0001 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # furina_seed42_eng_amh_esp_basic_0.0001 This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0216 - Spearman Corr: 0.7654 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Spearman Corr | |:-------------:|:-----:|:----:|:---------------:|:-------------:| | No log | 1.59 | 200 | 0.0271 | 0.7362 | | 0.0397 | 3.17 | 400 | 0.0172 | 0.7582 | | 0.0162 | 4.76 | 600 | 0.0243 | 0.7402 | | 0.0094 | 6.35 | 800 | 0.0212 | 0.7563 | | 0.0094 | 7.94 | 1000 | 0.0300 | 0.7421 | | 0.0066 | 9.52 | 1200 | 0.0228 | 0.7595 | | 0.0049 | 11.11 | 1400 | 0.0244 | 0.7605 | | 0.0042 | 12.7 | 1600 | 0.0199 | 0.7624 | | 0.0034 | 14.29 | 1800 | 0.0198 | 0.7566 | | 0.0034 | 15.87 | 2000 | 0.0216 | 0.7654 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
TeeZee/llama-2-7B-pirate-speech-QLORA-60-steps
TeeZee
2024-02-16T00:12:19Z
61
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "region:us" ]
text-generation
2024-02-16T00:09:02Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AlexanderHolmes0/Llama-2-7b-hf-sentiment-2
AlexanderHolmes0
2024-02-16T00:11:49Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T00:05:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
max129/lab1_finetuning
max129
2024-02-16T00:09:28Z
119
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-15T22:26:01Z
--- license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-fr tags: - generated_from_trainer datasets: - kde4 model-index: - name: lab1_finetuning results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lab1_finetuning This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
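The card above has no usage example; a hedged sketch of English-to-French translation with the standard pipeline API follows (the sample sentence is illustrative only):

```python
# Hedged sketch: translate English to French with the fine-tuned Marian model.
from transformers import pipeline

translator = pipeline("translation_en_to_fr", model="max129/lab1_finetuning")
print(translator("The file manager could not open the folder.")[0]["translation_text"])
```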
NilanE/karasu-translation-2
NilanE
2024-02-16T00:05:19Z
91
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T00:01:47Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: karasu-web --- # Uploaded model - **Developed by:** NilanE - **License:** apache-2.0 - **Finetuned from model :** karasu-web This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
rwongsing/ppo-LunarLander-v2
rwongsing
2024-02-16T00:00:51Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-16T00:00:27Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 248.36 +/- 17.94 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
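The usage block in the card above is still the template placeholder. A hedged sketch of loading and evaluating the agent follows; the zip filename inside the repo is an assumption, not confirmed by the card:

```python
# Hedged sketch: load the PPO checkpoint from the Hub and evaluate it.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub("rwongsing/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```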
SeifGad/FB-xglm-Nuclear
SeifGad
2024-02-15T23:57:16Z
77
0
transformers
[ "transformers", "safetensors", "xglm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-02-15T23:56:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mnemic/comic_speechbubble_yolov8
mnemic
2024-02-15T23:43:59Z
0
1
null
[ "region:us" ]
null
2024-02-15T23:32:42Z
--- {} --- **This model is only meant for research purposes.** The model is entirely trained on the following dataset: [yolomanga/speechballoon_comic](https://universe.roboflow.com/yolomanga/speechballoon_comic) However, since the dataset is created entirely out of Marvel comic book panels, I think the original author cannot license the images as CC4. I do not think this model can be used commercially either. ---- A Yolov8 detection model that detects comic book speech bubbles and sound effects in images. The model can be used as an [ADetailer](https://github.com/Bing-su/adetailer) model (for [Automatic1111](https://github.com/AUTOMATIC1111/) / Stable Diffusion use), or with other [inference scripts](https://github.com/MNeMoNiCuZ/yolov8-scripts) to return detection bounding boxes of speech bubbles. A small tutorial on how to use the model can be found on this GitHub: https://github.com/MNeMoNiCuZ/yolov8-scripts or this [CivitAI article](https://civitai.com/articles/4080/training-a-custom-adetailer-model-with-yolov8-detection-model). comic_speechbubble_m_yolov8_v1: ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/hjm3kn-G7mvqUp0RJ2ak5.jpeg) comic_speechbubble_s_yolov8_v1: ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/9fMnkl_C7o697jNy-zNHB.jpeg)
mnemic/nsfw_watermarks_yolov8
mnemic
2024-02-15T23:42:17Z
0
3
null
[ "license:cc-by-4.0", "region:us" ]
null
2024-02-15T23:07:20Z
--- license: cc-by-4.0 --- A Yolov8 detection model that detects watermarks in images. The model can be used as an [ADetailer](https://github.com/Bing-su/adetailer) model (for [Automatic1111](https://github.com/AUTOMATIC1111/) / Stable Diffusion use), or using other [inference scripts](https://github.com/MNeMoNiCuZ/yolov8-scripts) to return detection bounding boxes of watermarks. The model is trained partially on the following dataset: [MFW-feoki/W6-janF](https://universe.roboflow.com/mfw-feoki/w6_janf), and partially with synthetic NSFW data. A small tutorial on how to use the model can be found on this Github: https://github.com/MNeMoNiCuZ/yolov8-scripts or this [CivitAI article](https://civitai.com/articles/4080/training-a-custom-adetailer-model-with-yolov8-detection-model). ![image/png](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/liKPKMwioXSROKUSUVexj.png)
mnemic/watermarks_yolov8
mnemic
2024-02-15T23:41:53Z
0
11
null
[ "license:cc-by-4.0", "region:us" ]
null
2024-02-15T23:05:45Z
--- license: cc-by-4.0 --- A Yolov8 detection model that detects watermarks in images. The model can be used as an [ADetailer](https://github.com/Bing-su/adetailer) model (for [Automatic1111](https://github.com/AUTOMATIC1111/) / Stable Diffusion use), or using other [inference scripts](https://github.com/MNeMoNiCuZ/yolov8-scripts) to return detection bounding boxes of watermarks. The model is entirely trained on the following dataset: [MFW-feoki/W6-janF](https://universe.roboflow.com/mfw-feoki/w6_janf) A small tutorial on how to use the model can be found on this Github: https://github.com/MNeMoNiCuZ/yolov8-scripts or this [CivitAI article](https://civitai.com/articles/4080/training-a-custom-adetailer-model-with-yolov8-detection-model). ![image/png](https://cdn-uploads.huggingface.co/production/uploads/644ed23467c9458c913059ff/0bXYRgAzfJHsH0Ty50bz7.png)
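The watermark and speech-bubble cards above point to external scripts for inference; a hedged sketch of running such a YOLOv8 detector directly with the ultralytics package is shown below (the weight filename inside the repo is an assumption):

```python
# Hedged sketch: run the YOLOv8 watermark detector on an image.
from huggingface_hub import hf_hub_download
from ultralytics import YOLO

weights = hf_hub_download("mnemic/watermarks_yolov8", "watermarks_yolov8.pt")  # filename assumed
model = YOLO(weights)

results = model("example.jpg")  # placeholder input image
for box in results[0].boxes:
    print(box.xyxy, box.conf)
```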
AntoineGourru/Mistral_qlora_drome_R512A1024BS1E3
AntoineGourru
2024-02-15T23:38:53Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-02-15T23:37:05Z
--- library_name: peft base_model: mistralai/Mistral-7B-Instruct-v0.2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.7.0
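The card above records the bitsandbytes settings used for QLoRA training but no loading code. A hedged sketch of reloading the base model with that 4-bit config and attaching this repository as a PEFT adapter:

```python
# Hedged sketch: load the 4-bit base model and attach the QLoRA adapter from this repo.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "AntoineGourru/Mistral_qlora_drome_R512A1024BS1E3")
```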
NovoCode/NeuralPaca-7b
NovoCode
2024-02-15T23:28:49Z
2
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:Kquant03/NeuralTrix-7B-dpo-laser", "base_model:adapter:Kquant03/NeuralTrix-7B-dpo-laser", "license:other", "region:us" ]
null
2024-02-15T23:26:31Z
--- license: other library_name: peft tags: - llama-factory - lora - generated_from_trainer base_model: Kquant03/NeuralTrix-7B-dpo-laser model-index: - name: train_2024-02-15-20-15-48 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_2024-02-15-20-15-48 This model is a fine-tuned version of [Kquant03/NeuralTrix-7B-dpo-laser](https://huggingface.co/Kquant03/NeuralTrix-7B-dpo-laser) on the alpaca_en dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0+cu121 - Datasets 2.16.0 - Tokenizers 0.15.2
tmohoric-ewc/safer-skin
tmohoric-ewc
2024-02-15T23:21:29Z
0
0
sklearn
[ "sklearn", "skops", "tabular-regression", "license:mit", "region:us" ]
tabular-regression
2024-02-15T23:19:43Z
--- license: mit library_name: sklearn tags: - sklearn - skops - tabular-regression model_format: pickle model_file: MLR-model.pkl widget: - structuredData: CAS: - 696-71-9 - 94-02-0 - 15128-82-2 CID: - 12766.0 - 7170.0 - 27057.0 CanonicalSMILES: - canonical: OC1CCCCCCC1 original: C1CCCC(CCC1)O - canonical: CCOC(=O)CC(=O)c1ccccc1 original: CCOC(=O)CC(=O)C1=CC=CC=C1 - canonical: O=[N+]([O-])c1ncccc1O original: C1=CC(=C(N=C1)[N+](=O)[O-])O Cor1-C420 Adduct (M+H): - no Adduct - no Adduct - no Adduct Cor1-C420 Depletion 24 h (%): - 1.0 - 1.0 - 1.0 Cor1-C420 Dimer (%): - 2.0 - 5.0 - 4.0 Cor1-C420 Kmax (1/mM/min): - 6.979399898264935e-06 - 6.979399898264935e-06 - 6.979399898264935e-06 DPRA Cysteine depletion (%): - .nan - 11.2 - .nan DPRA Lysine depletion (%): - .nan - 0.9 - .nan InChI: - InChI=1S/C8H16O/c9-8-6-4-2-1-3-5-7-8/h8-9H,1-7H2 - InChI=1S/C11H12O3/c1-2-14-11(13)8-10(12)9-6-4-3-5-7-9/h3-7H,2,8H2,1H3 - InChI=1S/C5H4N2O3/c8-4-2-1-3-6-5(4)7(9)10/h1-3,8H InChIKey: - FHADSMKORVFYOS-UHFFFAOYSA-N - GKKZMYDNDDMXSE-UHFFFAOYSA-N - QBPDSKPWYWIHGA-UHFFFAOYSA-N IsomericSMILES: - canonical: OC1CCCCCCC1 original: C1CCCC(CCC1)O - canonical: CCOC(=O)CC(=O)c1ccccc1 original: CCOC(=O)CC(=O)C1=CC=CC=C1 - canonical: O=[N+]([O-])c1ncccc1O original: C1=CC(=C(N=C1)[N+](=O)[O-])O KeratinoSens EC1.5 (uM): - 249.6822169 - 62.9764329 - 4000.0 KeratinoSens EC3 (uM): - 4000.0 - 689.0 - 4000.0 KeratinoSens IC50 (uM): - 4000.0 - 4000.0 - 4000.0 KeratinoSens Imax: - 2.830997136 - 3.299878249 - 1.036847118 KeratinoSens Log EC1.5 (uM): - 2.3973876117256947 - 1.7991780577657597 - 3.6020599913279625 KeratinoSens Log IC50 (uM): - 3.6020599913279625 - 3.6020599913279625 - 3.6020599913279625 LLNA EC3 (%): - 100.0 - 100.0 - 100.0 LLNA Log EC3 (%): - 2.0 - 2.0 - 2.0 MW: - 128.21 - 192.21 - 140.1 OPERA Boiling point (°C): - 186.863 - 276.068 - 323.069 OPERA Henry constant (atm/m3): - 7.84426e-06 - 5.86618e-07 - 9.47507e-08 OPERA Log D at pH 5.5: - 2.36 - 1.87 - -0.01 OPERA Log D at pH 7.4: - 2.36 - 1.87 - -1.69 OPERA Melting point (°C): - 25.1423 - 49.3271 - 128.292 OPERA Octanol-air partition coefficient Log Koa: - 6.08747 - 6.56126 - 6.36287 OPERA Octanol-water partition coefficient LogP: - 2.3597 - 1.86704 - 0.398541 OPERA Vapour pressure (mm Hg): - 0.0839894 - 0.000406705 - 0.00472604 OPERA Water solubility (mol/L): - 0.0510404 - 0.01476 - 0.0416421 OPERA pKaa: - 10.68 - .nan - 5.31 OPERA pKab: - .nan - .nan - .nan SMILES: - canonical: OC1CCCCCCC1 original: OC1CCCCCCC1 - canonical: CCOC(=O)CC(=O)c1ccccc1 original: CCOC(=O)CC(=O)c1ccccc1 - canonical: O=[N+]([O-])c1ncccc1O original: OC1=CC=CN=C1[N+]([O-])=O TIMES Log Vapour pressure (Pa): - 0.8564932564458658 - -0.2851674875666674 - -0.9385475209128068 Vapour pressure (Pa): - 7.1861 - 0.5186 - 0.1152 cLogP: - 2.285000000003492 - 1.206000000005588 - 0.5590000000020154 hCLAT CV75 (ug/mL): - .nan - 571.0951916 - .nan hCLAT Call: - .nan - 0.0 - .nan hCLAT EC150 (ug/mL): - .nan - .nan - .nan hCLAT EC200 (ug/mL): - .nan - .nan - .nan hCLAT MIT (ug/mL): - .nan - .nan - .nan kDPRA Call: [] kDPRA Log rate (1/s/M): - .nan - .nan - .nan --- # Model description [More Information Needed] ## Intended uses & limitations [More Information Needed] ## Training Procedure [More Information Needed] ### Hyperparameters <details> <summary> Click to expand </summary> | Hyperparameter | Value | |------------------|---------| | copy_X | True | | fit_intercept | True | | n_jobs | | | positive | False | </details> ### Model Plot <style>#sk-container-id-1 {color: black;background-color: 
white;}</style> The fitted estimator is a plain LinearRegression(). ## Evaluation Results [More Information Needed] # How to Get Started with the Model [More Information Needed] # Model Card Authors This model card is written by following authors: [More Information Needed] # Model Card Contact You can contact the model card authors through following channels: [More Information Needed] # Citation Below you can find information related to citation. **BibTeX:** ``` [More Information Needed] ``` # model_card_authors Tomaz Mohoric # limitations This model is intended for educational purposes. # model_description This is a multiple linear regression model on a skin sensitisation dataset.
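The "How to Get Started" section of the card above is empty; since the metadata names a pickled file (MLR-model.pkl, pickle format), a hedged loading sketch follows. Only unpickle files from sources you trust:

```python
# Hedged sketch: download and unpickle the linear-regression model named in the card metadata.
import pickle
from huggingface_hub import hf_hub_download

path = hf_hub_download("tmohoric-ewc/safer-skin", "MLR-model.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)
# `model` is the sklearn LinearRegression shown in the "Model Plot" section.
```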
nolo99/openhermes-mistral-dpo-gptq
nolo99
2024-02-15T23:16:20Z
0
0
null
[ "tensorboard", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:TheBloke/OpenHermes-2-Mistral-7B-GPTQ", "base_model:finetune:TheBloke/OpenHermes-2-Mistral-7B-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-02-15T23:05:53Z
--- license: apache-2.0 base_model: TheBloke/OpenHermes-2-Mistral-7B-GPTQ tags: - trl - dpo - generated_from_trainer model-index: - name: openhermes-mistral-dpo-gptq results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # openhermes-mistral-dpo-gptq This model is a fine-tuned version of [TheBloke/OpenHermes-2-Mistral-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GPTQ) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8471 - Rewards/chosen: -0.2589 - Rewards/rejected: -0.1510 - Rewards/accuracies: 0.375 - Rewards/margins: -0.1079 - Logps/rejected: -116.0277 - Logps/chosen: -111.7328 - Logits/rejected: -2.2331 - Logits/chosen: -2.3546 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - training_steps: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6755 | 0.1 | 10 | 0.7298 | -0.0301 | -0.0035 | 0.375 | -0.0266 | -114.5520 | -109.4439 | -2.2395 | -2.3722 | | 0.6379 | 0.2 | 20 | 0.7804 | -0.1600 | -0.1132 | 0.375 | -0.0468 | -115.6494 | -110.7433 | -2.2341 | -2.3621 | | 0.7061 | 0.3 | 30 | 0.8180 | -0.2242 | -0.1463 | 0.375 | -0.0779 | -115.9803 | -111.3849 | -2.2357 | -2.3577 | | 0.6503 | 0.4 | 40 | 0.8460 | -0.2548 | -0.1442 | 0.375 | -0.1106 | -115.9595 | -111.6915 | -2.2330 | -2.3554 | | 0.9618 | 0.5 | 50 | 0.8471 | -0.2589 | -0.1510 | 0.375 | -0.1079 | -116.0277 | -111.7328 | -2.2331 | -2.3546 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.0.1+cu117 - Datasets 2.17.0 - Tokenizers 0.15.2
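For readers of the card above, the rewards/chosen, rewards/rejected and rewards/margins columns follow the DPO objective. A hedged, illustrative PyTorch sketch of that computation is given below; it is not the repository's training code, and beta=0.1 is an assumption matching TRL's default:

```python
# Illustrative DPO loss: rewards are beta-scaled log-prob ratios vs. the reference model.
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)        # "rewards/chosen"
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)  # "rewards/rejected"
    margins = chosen_rewards - rejected_rewards                             # "rewards/margins"
    loss = -F.logsigmoid(margins).mean()
    return loss, chosen_rewards, rejected_rewards
```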
crumbly/cramp-25m
crumbly
2024-02-15T23:13:36Z
99
8
transformers
[ "transformers", "pytorch", "gpt2a", "text-generation", "custom_code", "en", "dataset:cerebras/SlimPajama-627B", "dataset:togethercomputer/RedPajama-Data-1T", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2023-09-28T19:20:47Z
--- license: apache-2.0 datasets: - cerebras/SlimPajama-627B - togethercomputer/RedPajama-Data-1T language: - en --- A modified GPT-2 model with only 25 million non-embedding params that outbenches GPT-2(124m), Pythia-70m/160m, and Cerebras-111m. It has ScaledSinusoidal position embeddings, embedding layernorm, no biases, and was trained on only 8 billion tokens of the SlimPajama dataset at home on 2xA6000. (On the graphic it's mis-labeled as cramp-41m) **OLD BENCHMARK** | model | avg | arc | hellaswag | mmlu | truthfulqa | | --- | --- | --- | --- | --- | --- | | cramp-25m | 30.57 | 21.76 | 27.35 | 25.53 | 47.66 | | gpt2 (125m) | 30.06 | 22.1 | 31.6 | 25.86 | 40.67 | | pythia 70m deduped | 30.25 | 21.08 | 27.17 | 25.26 | 47.51 | | pythia 70m | 30.46 | 21.59 | 27.29 | 25.9 | 47.06 | | pythia 160m deduped | 31.16 | 24.06 | 30.34 | 24.95 | 44.34 | | pythia 160m | 30.58 | 22.78 | 30.34 | 24.95 | 44.26 | **NEW BENCHMARK** | Tasks |Version|Filter|n-shot| Metric |Value | |Stderr| |-------------|------:|------|-----:|--------|-----:|---|-----:| |arc_challenge| 1|none | 25|acc |0.1724|± |0.0110| | | |none | 25|acc_norm|0.2031|± |0.0118| |truthfulqa_mc2| 2|none | 0|acc |0.4767|± |0.0156| |hellaswag| 1|none | 10|acc |0.2687|± |0.0044| | | |none | 10|acc_norm|0.2773|± |0.0045| |winogrande| 1|none | 5|acc |0.5028|± |0.0141| *MMLU* | Tasks |Version|Filter|n-shot|Metric|Value | |Stderr| |-----------------------------------|------:|------|-----:|------|-----:|---|-----:| |world_religions | 0|none | 5|acc |0.1813|± |0.0295| |virology | 0|none | 5|acc |0.1928|± |0.0307| |us_foreign_policy | 0|none | 5|acc |0.2900|± |0.0456| |sociology | 0|none | 5|acc |0.2438|± |0.0304| |security_studies | 0|none | 5|acc |0.2367|± |0.0272| |public_relations | 0|none | 5|acc |0.2455|± |0.0412| |professional_psychology | 0|none | 5|acc |0.2271|± |0.0169| |professional_medicine | 0|none | 5|acc |0.4375|± |0.0301| |professional_law | 0|none | 5|acc |0.2490|± |0.0110| |professional_accounting | 0|none | 5|acc |0.2589|± |0.0261| |prehistory | 0|none | 5|acc |0.2963|± |0.0254| |philosophy | 0|none | 5|acc |0.2315|± |0.0240| |nutrition | 0|none | 5|acc |0.2222|± |0.0238| |moral_scenarios | 0|none | 5|acc |0.2313|± |0.0141| |moral_disputes | 0|none | 5|acc |0.2168|± |0.0222| |miscellaneous | 0|none | 5|acc |0.2708|± |0.0159| |medical_genetics | 0|none | 5|acc |0.3000|± |0.0461| |marketing | 0|none | 5|acc |0.1923|± |0.0258| |management | 0|none | 5|acc |0.1942|± |0.0392| |machine_learning | 0|none | 5|acc |0.2054|± |0.0383| |logical_fallacies | 0|none | 5|acc |0.2393|± |0.0335| |jurisprudence | 0|none | 5|acc |0.2130|± |0.0396| |international_law | 0|none | 5|acc |0.2562|± |0.0398| |human_sexuality | 0|none | 5|acc |0.2366|± |0.0373| |human_aging | 0|none | 5|acc |0.2063|± |0.0272| |high_school_world_history | 0|none | 5|acc |0.2700|± |0.0289| |high_school_us_history | 0|none | 5|acc |0.2206|± |0.0291| |high_school_statistics | 0|none | 5|acc |0.4722|± |0.0340| |high_school_psychology | 0|none | 5|acc |0.2257|± |0.0179| |high_school_physics | 0|none | 5|acc |0.2384|± |0.0348| |high_school_microeconomics | 0|none | 5|acc |0.3403|± |0.0308| |high_school_mathematics | 0|none | 5|acc |0.2630|± |0.0268| |high_school_macroeconomics | 0|none | 5|acc |0.2051|± |0.0205| |high_school_government_and_politics| 0|none | 5|acc |0.2280|± |0.0303| |high_school_geography | 0|none | 5|acc |0.3535|± |0.0341| |high_school_european_history | 0|none | 5|acc |0.2909|± |0.0355| |high_school_computer_science | 0|none | 5|acc |0.2400|± |0.0429| 
|high_school_chemistry | 0|none | 5|acc |0.2759|± |0.0314| |high_school_biology | 0|none | 5|acc |0.3161|± |0.0265| |global_facts | 0|none | 5|acc |0.2000|± |0.0402| |formal_logic | 0|none | 5|acc |0.1825|± |0.0346| |elementary_mathematics | 0|none | 5|acc |0.2566|± |0.0225| |electrical_engineering | 0|none | 5|acc |0.2414|± |0.0357| |econometrics | 0|none | 5|acc |0.2544|± |0.0410| |conceptual_physics | 0|none | 5|acc |0.2809|± |0.0294| |computer_security | 0|none | 5|acc |0.2000|± |0.0402| |college_physics | 0|none | 5|acc |0.3431|± |0.0472| |college_medicine | 0|none | 5|acc |0.2197|± |0.0316| |college_mathematics | 0|none | 5|acc |0.3100|± |0.0465| |college_computer_science | 0|none | 5|acc |0.3100|± |0.0465| |college_chemistry | 0|none | 5|acc |0.3400|± |0.0476| |college_biology | 0|none | 5|acc |0.2083|± |0.0340| |clinical_knowledge | 0|none | 5|acc |0.2189|± |0.0254| |business_ethics | 0|none | 5|acc |0.2000|± |0.0402| |astronomy | 0|none | 5|acc |0.2237|± |0.0339| |anatomy | 0|none | 5|acc |0.3333|± |0.0407| |abstract_algebra | 0|none | 5|acc |0.2200|± |0.0416| ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6079949388160e14e4e2e499/NzTdlxtBDp4drBRZgJiXt.png)
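The card does not include a usage snippet; because it lists a custom `gpt2a` architecture (note the `custom_code` tag), loading presumably requires `trust_remote_code=True`. A minimal sketch under that assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "crumbly/cramp-25m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code lets transformers execute the custom gpt2a modeling code hosted in the repo.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Once upon a time", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```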
cnrcastroli/drpairForm2Checkboxes10kList
cnrcastroli
2024-02-15T23:08:00Z
16
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-02-14T19:55:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Shijia/furina_seed42_eng_amh_hau_basic_0.0001
Shijia
2024-02-15T23:03:47Z
90
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:yihongLiu/furina", "base_model:finetune:yihongLiu/furina", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-15T23:02:58Z
--- base_model: yihongLiu/furina tags: - generated_from_trainer model-index: - name: furina_seed42_eng_amh_hau_basic_0.0001 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # furina_seed42_eng_amh_hau_basic_0.0001 This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0338 - Spearman Corr: 0.7400 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Spearman Corr | |:-------------:|:-----:|:----:|:---------------:|:-------------:| | No log | 1.55 | 200 | 0.0268 | 0.6751 | | 0.0634 | 3.1 | 400 | 0.0365 | 0.7191 | | 0.0233 | 4.65 | 600 | 0.0237 | 0.7350 | | 0.0152 | 6.2 | 800 | 0.0311 | 0.7443 | | 0.0152 | 7.75 | 1000 | 0.0321 | 0.7341 | | 0.0108 | 9.3 | 1200 | 0.0303 | 0.7293 | | 0.0078 | 10.85 | 1400 | 0.0301 | 0.7334 | | 0.0062 | 12.4 | 1600 | 0.0368 | 0.7249 | | 0.005 | 13.95 | 1800 | 0.0377 | 0.7439 | | 0.005 | 15.5 | 2000 | 0.0327 | 0.7443 | | 0.0044 | 17.05 | 2200 | 0.0338 | 0.7400 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
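The reported Spearman correlation suggests a single-output regression head over sentence pairs; a hedged scoring sketch under that assumption (the exact input format used during fine-tuning is not documented in the card):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Shijia/furina_seed42_eng_amh_hau_basic_0.0001"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Assumed usage: one regression logit scoring the semantic relatedness of a sentence pair.
enc = tokenizer("A child is playing outside.", "Kids are playing in the yard.", return_tensors="pt")
with torch.no_grad():
    score = model(**enc).logits.squeeze().item()
print(score)
```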
yoon1000/Korean_STT_v0
yoon1000
2024-02-15T23:01:53Z
186
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-xls-r-300m", "base_model:finetune:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-13T00:09:12Z
--- license: apache-2.0 base_model: facebook/wav2vec2-xls-r-300m tags: - generated_from_trainer model-index: - name: ft_0213_korean results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ft_0213_korean This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6093 - Cer: 0.0958 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 24.3697 | 0.17 | 500 | 5.0804 | 1.0 | | 4.8016 | 0.34 | 1000 | 5.1173 | 1.0 | | 4.6791 | 0.51 | 1500 | 4.7037 | 1.0000 | | 4.562 | 0.68 | 2000 | 4.6273 | 0.9779 | | 4.4539 | 0.84 | 2500 | 4.2212 | 0.9370 | | 3.5358 | 1.01 | 3000 | 2.7001 | 0.5326 | | 2.6771 | 1.18 | 3500 | 2.1532 | 0.4519 | | 2.2226 | 1.35 | 4000 | 1.7409 | 0.3787 | | 1.9143 | 1.52 | 4500 | 1.4978 | 0.3372 | | 1.6892 | 1.69 | 5000 | 1.3429 | 0.3112 | | 1.5503 | 1.86 | 5500 | 1.1997 | 0.2812 | | 1.4184 | 2.03 | 6000 | 1.1011 | 0.2624 | | 1.2758 | 2.19 | 6500 | 1.0286 | 0.2551 | | 1.2045 | 2.36 | 7000 | 0.9572 | 0.2373 | | 1.1666 | 2.53 | 7500 | 0.9170 | 0.2251 | | 1.1007 | 2.7 | 8000 | 0.8521 | 0.2142 | | 1.0391 | 2.87 | 8500 | 0.8260 | 0.2140 | | 0.9761 | 3.04 | 9000 | 0.8005 | 0.2071 | | 0.9166 | 3.21 | 9500 | 0.7572 | 0.1941 | | 0.864 | 3.38 | 10000 | 0.7375 | 0.1935 | | 0.8579 | 3.54 | 10500 | 0.7404 | 0.1933 | | 0.8442 | 3.71 | 11000 | 0.7080 | 0.1799 | | 0.8114 | 3.88 | 11500 | 0.6816 | 0.1766 | | 0.7863 | 4.05 | 12000 | 0.6921 | 0.1753 | | 0.7454 | 4.22 | 12500 | 0.6831 | 0.1759 | | 0.7077 | 4.39 | 13000 | 0.6610 | 0.1689 | | 0.6974 | 4.56 | 13500 | 0.6864 | 0.1687 | | 0.7001 | 4.73 | 14000 | 0.6450 | 0.1641 | | 0.6636 | 4.9 | 14500 | 0.6303 | 0.1585 | | 0.6423 | 5.06 | 15000 | 0.6465 | 0.1597 | | 0.5828 | 5.23 | 15500 | 0.6224 | 0.1550 | | 0.6085 | 5.4 | 16000 | 0.6154 | 0.1534 | | 0.5877 | 5.57 | 16500 | 0.6112 | 0.1510 | | 0.586 | 5.74 | 17000 | 0.6022 | 0.1485 | | 0.5656 | 5.91 | 17500 | 0.6022 | 0.1491 | | 0.5366 | 6.08 | 18000 | 0.5894 | 0.1468 | | 0.5134 | 6.25 | 18500 | 0.5779 | 0.1435 | | 0.5217 | 6.41 | 19000 | 0.5960 | 0.1449 | | 0.5049 | 6.58 | 19500 | 0.5813 | 0.1408 | | 0.4961 | 6.75 | 20000 | 0.5582 | 0.1382 | | 0.5089 | 6.92 | 20500 | 0.5898 | 0.1385 | | 0.4769 | 7.09 | 21000 | 0.5739 | 0.1361 | | 0.4552 | 7.26 | 21500 | 0.5700 | 0.1369 | | 0.4552 | 7.43 | 22000 | 0.5956 | 0.1367 | | 0.4476 | 7.6 | 22500 | 0.5885 | 0.1342 | | 0.4449 | 7.77 | 23000 | 0.5501 | 0.1314 | | 0.4333 | 7.93 | 23500 | 0.5474 | 0.1302 | | 0.3946 | 8.1 | 24000 | 0.6018 | 0.1327 | | 0.3993 | 8.27 | 24500 | 0.5680 | 0.1295 | | 0.3892 | 8.44 | 25000 | 0.5575 | 0.1309 | | 0.3936 | 8.61 | 25500 | 0.5666 | 0.1288 | | 0.3957 | 8.78 | 26000 | 0.5546 | 0.1262 | | 0.4006 | 8.95 | 26500 | 0.5702 | 0.1264 | | 0.3456 | 9.12 | 
27000 | 0.5614 | 0.1247 | | 0.3459 | 9.28 | 27500 | 0.5608 | 0.1242 | | 0.3511 | 9.45 | 28000 | 0.5527 | 0.1236 | | 0.3504 | 9.62 | 28500 | 0.5479 | 0.1201 | | 0.3529 | 9.79 | 29000 | 0.5525 | 0.1200 | | 0.3397 | 9.96 | 29500 | 0.5451 | 0.1201 | | 0.314 | 10.13 | 30000 | 0.5549 | 0.1184 | | 0.3048 | 10.3 | 30500 | 0.5616 | 0.1180 | | 0.3021 | 10.47 | 31000 | 0.5634 | 0.1184 | | 0.3136 | 10.63 | 31500 | 0.5753 | 0.1166 | | 0.3116 | 10.8 | 32000 | 0.5410 | 0.1149 | | 0.3098 | 10.97 | 32500 | 0.5354 | 0.1143 | | 0.2852 | 11.14 | 33000 | 0.5482 | 0.1144 | | 0.2807 | 11.31 | 33500 | 0.5465 | 0.1126 | | 0.2771 | 11.48 | 34000 | 0.5452 | 0.1147 | | 0.2865 | 11.65 | 34500 | 0.5538 | 0.1128 | | 0.2783 | 11.82 | 35000 | 0.5374 | 0.1118 | | 0.2775 | 11.99 | 35500 | 0.5418 | 0.1121 | | 0.2649 | 12.15 | 36000 | 0.5468 | 0.1104 | | 0.2558 | 12.32 | 36500 | 0.5498 | 0.1108 | | 0.2632 | 12.49 | 37000 | 0.5699 | 0.1118 | | 0.2488 | 12.66 | 37500 | 0.5523 | 0.1088 | | 0.2552 | 12.83 | 38000 | 0.5532 | 0.1090 | | 0.2577 | 13.0 | 38500 | 0.5480 | 0.1078 | | 0.2334 | 13.17 | 39000 | 0.5716 | 0.1078 | | 0.2387 | 13.34 | 39500 | 0.5740 | 0.1080 | | 0.2364 | 13.5 | 40000 | 0.5587 | 0.1066 | | 0.2253 | 13.67 | 40500 | 0.5544 | 0.1071 | | 0.2536 | 13.84 | 41000 | 0.5680 | 0.1055 | | 0.2254 | 14.01 | 41500 | 0.5605 | 0.1058 | | 0.2207 | 14.18 | 42000 | 0.5776 | 0.1049 | | 0.2127 | 14.35 | 42500 | 0.5762 | 0.1046 | | 0.2121 | 14.52 | 43000 | 0.5637 | 0.1043 | | 0.2048 | 14.69 | 43500 | 0.5647 | 0.1048 | | 0.2085 | 14.85 | 44000 | 0.5658 | 0.1032 | | 0.2031 | 15.02 | 44500 | 0.5789 | 0.1026 | | 0.1923 | 15.19 | 45000 | 0.5627 | 0.1011 | | 0.1956 | 15.36 | 45500 | 0.5698 | 0.1016 | | 0.1989 | 15.53 | 46000 | 0.5950 | 0.1016 | | 0.1996 | 15.7 | 46500 | 0.5833 | 0.1003 | | 0.1895 | 15.87 | 47000 | 0.5872 | 0.1003 | | 0.1893 | 16.04 | 47500 | 0.5861 | 0.1001 | | 0.1837 | 16.21 | 48000 | 0.5947 | 0.0998 | | 0.1875 | 16.37 | 48500 | 0.5898 | 0.0994 | | 0.1773 | 16.54 | 49000 | 0.5885 | 0.1001 | | 0.1834 | 16.71 | 49500 | 0.5964 | 0.0995 | | 0.1787 | 16.88 | 50000 | 0.5935 | 0.0994 | | 0.1719 | 17.05 | 50500 | 0.5990 | 0.0987 | | 0.1697 | 17.22 | 51000 | 0.5917 | 0.0987 | | 0.1736 | 17.39 | 51500 | 0.5988 | 0.0988 | | 0.1695 | 17.56 | 52000 | 0.5988 | 0.0978 | | 0.1663 | 17.72 | 52500 | 0.6062 | 0.0979 | | 0.1621 | 17.89 | 53000 | 0.5993 | 0.0976 | | 0.1653 | 18.06 | 53500 | 0.6049 | 0.0973 | | 0.1639 | 18.23 | 54000 | 0.6169 | 0.0976 | | 0.1574 | 18.4 | 54500 | 0.6063 | 0.0973 | | 0.1557 | 18.57 | 55000 | 0.5953 | 0.0959 | | 0.1608 | 18.74 | 55500 | 0.5943 | 0.0963 | | 0.1621 | 18.91 | 56000 | 0.5966 | 0.0961 | | 0.1534 | 19.07 | 56500 | 0.6086 | 0.0961 | | 0.1441 | 19.24 | 57000 | 0.6128 | 0.0962 | | 0.169 | 19.41 | 57500 | 0.6053 | 0.0957 | | 0.1516 | 19.58 | 58000 | 0.6066 | 0.0960 | | 0.1474 | 19.75 | 58500 | 0.6080 | 0.0958 | | 0.1478 | 19.92 | 59000 | 0.6093 | 0.0958 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
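A minimal transcription sketch for this fine-tuned wav2vec2 checkpoint; the audio path is a placeholder, and 16 kHz mono input is assumed because that is what XLS-R-based models expect:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="yoon1000/Korean_STT_v0")
# "sample_korean.wav" is a placeholder; supply any 16 kHz mono Korean recording.
result = asr("sample_korean.wav")
print(result["text"])
```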
Audiogen/agc-discrete
Audiogen
2024-02-15T22:56:43Z
24
2
transformers
[ "transformers", "safetensors", "agc", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-15T22:55:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jspr/miqurelian-120b
jspr
2024-02-15T22:52:10Z
10
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2203.05482", "base_model:152334H/miqu-1-70b-sf", "base_model:merge:152334H/miqu-1-70b-sf", "base_model:grimulkan/aurelian-v0.5-70b-rope8-32K-fp16", "base_model:merge:grimulkan/aurelian-v0.5-70b-rope8-32K-fp16", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-15T22:34:18Z
--- base_model: - 152334H/miqu-1-70b-sf - grimulkan/aurelian-v0.5-70b-rope8-32K-fp16 library_name: transformers tags: - mergekit - merge --- # miqurelian-120b This is a 120b merge created by interleaving layers of [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) with [Aurelian](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16), a creative writing model, using [mergekit](https://github.com/cg123/mergekit). It performs at approximately SOTA level for long-context creative writing tasks that require strong semantic coherence. ## Model Details - Max Context: 32768 tokens - Layers: 140 ### Prompt template ``` <s>[INST] {prompt} [/INST] ``` ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: - [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) - [grimulkan/aurelian-v0.5-70b-rope8-32K-fp16](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16) ### Configuration The following YAML configuration was used to produce this model: <details><summary>mergekit_config.yml</summary> ```yaml merge_method: linear parameters: weight: 1.0 slices: - sources: - model: 152334H/miqu-1-70b-sf layer_range: [0, 1] - model: grimulkan/aurelian-v0.5-70b-rope8-32K-fp16 layer_range: [0, 1] parameters: weight: 0 - sources: - model: 152334H/miqu-1-70b-sf layer_range: [1, 20] - sources: - model: grimulkan/aurelian-v0.5-70b-rope8-32K-fp16 layer_range: [10, 30] - sources: - model: 152334H/miqu-1-70b-sf layer_range: [20, 40] - sources: - model: grimulkan/aurelian-v0.5-70b-rope8-32K-fp16 layer_range: [30, 50] - sources: - model: 152334H/miqu-1-70b-sf layer_range: [40, 60] - sources: - model: grimulkan/aurelian-v0.5-70b-rope8-32K-fp16 layer_range: [50, 70] - sources: - model: 152334H/miqu-1-70b-sf layer_range: [60, 79] - sources: - model: 152334H/miqu-1-70b-sf layer_range: [79, 80] - model: grimulkan/aurelian-v0.5-70b-rope8-32K-fp16 layer_range: [79, 80] parameters: weight: 0 dtype: float16 tokenizer_source: model:152334H/miqu-1-70b-sf ``` </details>
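A hedged generation sketch using the prompt template above; a 120b float16 merge is roughly 240 GB of weights, so multi-GPU placement via device_map="auto" (or quantization) is assumed, and the sampling settings are only illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jspr/miqurelian-120b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Card's template is <s>[INST] {prompt} [/INST]; the tokenizer adds the leading <s> itself.
prompt = "[INST] Write the opening paragraph of a slow-burn mystery set in a lighthouse. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```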
Eric111/AlphaMayo
Eric111
2024-02-15T22:36:53Z
3
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Eric111/Mayo", "mlabonne/AlphaMonarch-7B", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-15T19:02:37Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - Eric111/Mayo - mlabonne/AlphaMonarch-7B --- Acknowledgements: https://github.com/mlabonne/llm-course # AlphaMayo AlphaMayo is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [Eric111/Mayo](https://huggingface.co/Eric111/Mayo) * [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: Eric111/Mayo layer_range: [0, 32] - model: mlabonne/AlphaMonarch-7B layer_range: [0, 32] merge_method: slerp base_model: Eric111/Mayo parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
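A minimal usage sketch for the merged model; the Mistral-style [INST] prompt format is an assumption carried over from its parent models, not something the card states:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Eric111/AlphaMayo",
    torch_dtype="auto",
    device_map="auto",
)
out = generator("[INST] Suggest three unusual pizza topping combinations. [/INST]", max_new_tokens=128)
print(out[0]["generated_text"])
```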
pjbhaumik/crossencoder-km1
pjbhaumik
2024-02-15T22:36:09Z
92
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:cross-encoder/stsb-TinyBERT-L-4", "base_model:finetune:cross-encoder/stsb-TinyBERT-L-4", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-13T16:25:41Z
--- license: apache-2.0 base_model: cross-encoder/stsb-TinyBERT-L-4 tags: - generated_from_trainer model-index: - name: crossencoder-km1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # crossencoder-km1 This model is a fine-tuned version of [cross-encoder/stsb-TinyBERT-L-4](https://huggingface.co/cross-encoder/stsb-TinyBERT-L-4) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0110 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 100 - eval_batch_size: 80 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 7.2478 | 1.0 | 20 | 6.6948 | | 3.8026 | 2.0 | 40 | 2.8643 | | 0.9993 | 3.0 | 60 | 0.8714 | | 0.2986 | 4.0 | 80 | 0.2379 | | 0.1161 | 5.0 | 100 | 0.0786 | | 0.0414 | 6.0 | 120 | 0.0461 | | 0.0218 | 7.0 | 140 | 0.0250 | | 0.0144 | 8.0 | 160 | 0.0140 | | 0.0101 | 9.0 | 180 | 0.0122 | | 0.0083 | 10.0 | 200 | 0.0120 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.0.1 - Datasets 2.17.0 - Tokenizers 0.15.2
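Since the base checkpoint is an STS-style cross-encoder, a hedged inference sketch with the sentence-transformers `CrossEncoder` wrapper; the example pairs are invented, and the score scale depends on the (undocumented) fine-tuning targets:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("pjbhaumik/crossencoder-km1")
pairs = [
    ("how do I reset my password", "steps to recover account access"),
    ("how do I reset my password", "quarterly revenue summary"),
]
scores = model.predict(pairs)  # one similarity/relevance score per pair
print(scores)
```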
aidonuts/forthright-smooch-141-s1000
aidonuts
2024-02-15T22:31:27Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-15T22:30:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
NilanE/karasu-translation-gguf
NilanE
2024-02-15T22:21:44Z
43
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-02-15T22:19:35Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: karasu-web --- # Uploaded model - **Developed by:** NilanE - **License:** apache-2.0 - **Finetuned from model :** karasu-web This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
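Because the repository ships GGUF weights, a hedged llama-cpp-python sketch; the quantization file name and the translation prompt format are assumptions not stated in the card:

```python
from llama_cpp import Llama

# The .gguf file name is a placeholder; use whichever quantization the repo actually contains.
llm = Llama(model_path="karasu-translation.Q4_K_M.gguf", n_ctx=2048)
out = llm("Translate the following Japanese sentence to English: 昨日は雨が降っていた。\nEnglish:", max_tokens=64)
print(out["choices"][0]["text"].strip())
```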
dmusingu/phi2tokenizerv2
dmusingu
2024-02-15T22:19:36Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-15T22:19:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
erickrribeiro/ner_model
erickrribeiro
2024-02-15T22:19:35Z
94
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:__main__", "base_model:neuralmind/bert-base-portuguese-cased", "base_model:finetune:neuralmind/bert-base-portuguese-cased", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-02-08T20:46:03Z
--- license: mit base_model: neuralmind/bert-base-portuguese-cased tags: - generated_from_trainer datasets: - __main__ metrics: - precision - recall - f1 - accuracy model-index: - name: ner_model results: - task: name: Token Classification type: token-classification dataset: name: __main__ type: __main__ config: local split: test args: local metrics: - name: Precision type: precision value: 0.5783305117853887 - name: Recall type: recall value: 0.6134825252106645 - name: F1 type: f1 value: 0.5953881217321357 - name: Accuracy type: accuracy value: 0.7670984455958549 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ner_model This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the __main__ dataset. It achieves the following results on the evaluation set: - Loss: 1.5136 - Precision: 0.5783 - Recall: 0.6135 - F1: 0.5954 - Accuracy: 0.7671 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.7447 | 1.0 | 5905 | 0.7678 | 0.4966 | 0.5209 | 0.5085 | 0.7409 | | 0.6153 | 2.0 | 11810 | 0.7378 | 0.5628 | 0.5600 | 0.5614 | 0.7624 | | 0.4623 | 3.0 | 17715 | 0.7959 | 0.5449 | 0.5836 | 0.5636 | 0.7573 | | 0.3629 | 4.0 | 23620 | 0.8921 | 0.5679 | 0.6017 | 0.5843 | 0.7631 | | 0.246 | 5.0 | 29525 | 1.0286 | 0.5878 | 0.5955 | 0.5916 | 0.7685 | | 0.1923 | 6.0 | 35430 | 1.2142 | 0.5926 | 0.5957 | 0.5941 | 0.7689 | | 0.1477 | 7.0 | 41335 | 1.3019 | 0.5681 | 0.6091 | 0.5879 | 0.7591 | | 0.1214 | 8.0 | 47240 | 1.4101 | 0.5834 | 0.6110 | 0.5969 | 0.7659 | | 0.0793 | 9.0 | 53145 | 1.4745 | 0.5848 | 0.6136 | 0.5989 | 0.7688 | | 0.0733 | 10.0 | 59050 | 1.5136 | 0.5783 | 0.6135 | 0.5954 | 0.7671 | ### Framework versions - Transformers 4.36.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.15.0
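A minimal token-classification sketch for this Portuguese NER model; the entity label set comes from the unspecified `__main__` dataset, so the tags in the output are not documented here:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="erickrribeiro/ner_model",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Maria trabalha na Universidade Federal do Rio de Janeiro."))
```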
Shijia/furina_seed42_eng_esp_hau_basic_5e-06
Shijia
2024-02-15T22:17:14Z
90
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:yihongLiu/furina", "base_model:finetune:yihongLiu/furina", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-15T22:16:28Z
--- base_model: yihongLiu/furina tags: - generated_from_trainer model-index: - name: furina_seed42_eng_esp_hau_basic_5e-06 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # furina_seed42_eng_esp_hau_basic_5e-06 This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0249 - Spearman Corr: 0.7476 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Spearman Corr | |:-------------:|:-----:|:----:|:---------------:|:-------------:| | No log | 1.45 | 200 | 0.0579 | 0.1807 | | 0.1318 | 2.91 | 400 | 0.0365 | 0.5375 | | 0.0455 | 4.36 | 600 | 0.0280 | 0.6335 | | 0.0455 | 5.82 | 800 | 0.0251 | 0.6685 | | 0.0329 | 7.27 | 1000 | 0.0255 | 0.6937 | | 0.0273 | 8.73 | 1200 | 0.0238 | 0.7208 | | 0.0247 | 10.18 | 1400 | 0.0232 | 0.7297 | | 0.0247 | 11.64 | 1600 | 0.0238 | 0.7338 | | 0.0229 | 13.09 | 1800 | 0.0232 | 0.7352 | | 0.0214 | 14.55 | 2000 | 0.0237 | 0.7407 | | 0.0204 | 16.0 | 2200 | 0.0246 | 0.7432 | | 0.0204 | 17.45 | 2400 | 0.0253 | 0.7453 | | 0.0191 | 18.91 | 2600 | 0.0254 | 0.7465 | | 0.0181 | 20.36 | 2800 | 0.0256 | 0.7475 | | 0.0181 | 21.82 | 3000 | 0.0249 | 0.7476 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2