Dataset columns (type and observed range across the dataset):

| Column | Type | Observed range / values |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-26 00:41:36 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 496 values |
| tags | sequence of strings | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 54 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-26 00:41:32 |
| card | string | length 11 to 1.01M |
**casque/0188_naked_ribbon_v1** · author: casque · last_modified: 2024-05-28T12:48:26Z · downloads: 0 · likes: 0 · library_name: null · tags:
[ "license:creativeml-openrail-m", "region:us" ]
pipeline_tag: null · createdAt: 2024-05-28T12:47:29Z
--- license: creativeml-openrail-m ---
**cyr19/gpt2-large-de-quatrain-conditioned** · author: cyr19 · last_modified: 2024-05-28T12:45:15Z · downloads: 134 · likes: 0 · library_name: transformers · tags:
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation · createdAt: 2024-05-28T12:43:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
**kanishka/smolm-autoreg-bpe-counterfactual_babylm_anans_new-1e-3** · author: kanishka · last_modified: 2024-05-28T12:44:20Z · downloads: 4 · likes: 0 · library_name: transformers · tags:
[ "transformers", "tensorboard", "safetensors", "opt", "text-generation", "generated_from_trainer", "dataset:kanishka/counterfactual_babylm_anans_new", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation · createdAt: 2024-05-26T22:36:43Z
--- tags: - generated_from_trainer datasets: - kanishka/counterfactual_babylm_anans_new metrics: - accuracy model-index: - name: smolm-autoreg-bpe-counterfactual_babylm_anans_new-1e-3 results: - task: name: Causal Language Modeling type: text-generation dataset: name: kanishka/counterfactual_babylm_anans_new type: kanishka/counterfactual_babylm_anans_new metrics: - name: Accuracy type: accuracy value: 0.40906300860483724 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smolm-autoreg-bpe-counterfactual_babylm_anans_new-1e-3 This model was trained from scratch on the kanishka/counterfactual_babylm_anans_new dataset. It achieves the following results on the evaluation set: - Loss: 3.4254 - Accuracy: 0.4091 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 32000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 3.6007 | 1.0 | 18595 | 3.7808 | 0.3583 | | 3.3831 | 2.0 | 37190 | 3.5820 | 0.3801 | | 3.2601 | 3.0 | 55785 | 3.4779 | 0.3914 | | 3.1745 | 4.0 | 74380 | 3.4456 | 0.3976 | | 3.1238 | 5.0 | 92975 | 3.3903 | 0.4007 | | 3.081 | 6.0 | 111570 | 3.3846 | 0.4037 | | 3.0438 | 7.0 | 130165 | 3.3775 | 0.4049 | | 3.0139 | 8.0 | 148760 | 3.3804 | 0.4060 | | 2.9849 | 9.0 | 167355 | 3.3752 | 0.4065 | | 2.9642 | 10.0 | 185950 | 3.3811 | 0.4078 | | 2.935 | 11.0 | 204545 | 3.3705 | 0.4076 | | 2.9128 | 12.0 | 223140 | 3.3703 | 0.4087 | | 2.8963 | 13.0 | 241735 | 3.3833 | 0.4084 | | 2.8702 | 14.0 | 260330 | 3.3925 | 0.4090 | | 2.8516 | 15.0 | 278925 | 3.3907 | 0.4092 | | 2.8317 | 16.0 | 297520 | 3.3914 | 0.4094 | | 2.8121 | 17.0 | 316115 | 3.4075 | 0.4087 | | 2.7963 | 18.0 | 334710 | 3.4124 | 0.4091 | | 2.7801 | 19.0 | 353305 | 3.4181 | 0.4091 | | 2.7649 | 20.0 | 371900 | 3.4254 | 0.4091 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.19.1
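The card lists training hyperparameters and per-epoch results but no inference snippet. A minimal sketch of loading this checkpoint for generation with 🤗 Transformers might look like the following (the prompt and sampling settings are illustrative assumptions, not values from the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Causal LM (OPT architecture) trained from scratch on counterfactual_babylm_anans_new
model_id = "kanishka/smolm-autoreg-bpe-counterfactual_babylm_anans_new-1e-3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation; sampling parameters are placeholders
inputs = tokenizer("The child picked up", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```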
**cyr19/gpt2-large-de-quatrain** · author: cyr19 · last_modified: 2024-05-28T12:42:41Z · downloads: 134 · likes: 0 · library_name: transformers · tags:
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation · createdAt: 2024-05-28T12:41:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
**ibrahimkettaneh/llama-3-cat-8b-instruct-psychotherapist-SLERP-zero-v1** · author: ibrahimkettaneh · last_modified: 2024-05-28T12:39:40Z · downloads: 6 · likes: 3 · library_name: transformers · tags:
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:SteelStorage/llama-3-cat-8b-instruct-v1", "base_model:merge:SteelStorage/llama-3-cat-8b-instruct-v1", "base_model:ibrahimkettaneh/llama-3-8B-chat-psychotherapist-merged", "base_model:merge:ibrahimkettaneh/llama-3-8B-chat-psychotherapist-merged", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation · createdAt: 2024-05-28T12:23:06Z
--- base_model: - ibrahimkettaneh/llama-3-8B-chat-psychotherapist-merged - TheSkullery/llama-3-cat-8b-instruct-v1 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [ibrahimkettaneh/llama-3-8B-chat-psychotherapist-merged](https://huggingface.co/ibrahimkettaneh/llama-3-8B-chat-psychotherapist-merged) * [TheSkullery/llama-3-cat-8b-instruct-v1](https://huggingface.co/TheSkullery/llama-3-cat-8b-instruct-v1) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: TheSkullery/llama-3-cat-8b-instruct-v1 layer_range: - 0 - 32 - model: ibrahimkettaneh/llama-3-8B-chat-psychotherapist-merged layer_range: - 0 - 32 merge_method: slerp base_model: TheSkullery/llama-3-cat-8b-instruct-v1 parameters: t: - filter: self_attn value: - 0 - 0.5 - 0.3 - 0.7 - 1 - filter: mlp value: - 1 - 0.5 - 0.7 - 0.3 - 0 - value: 0.5 dtype: bfloat16 ```
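The card documents the merge configuration but not how to run inference on the result. As a sketch, the merged checkpoint should load like any other Llama-3-based chat model with 🤗 Transformers (device mapping and generation settings below are assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibrahimkettaneh/llama-3-cat-8b-instruct-psychotherapist-SLERP-zero-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# bfloat16 matches the dtype declared in the merge config above
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "How can I manage exam anxiety?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```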
**Ankur87/Llama2_Time_series_forecasting_2.0** · author: Ankur87 · last_modified: 2024-05-28T12:33:38Z · downloads: 137 · likes: 0 · library_name: transformers · tags:
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation · createdAt: 2024-05-28T12:30:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
**ryan0712/llama-3-8b-slow-DUS-random-layer-method2** · author: ryan0712 · last_modified: 2024-05-28T12:31:03Z · downloads: 6 · likes: 0 · library_name: transformers · tags:
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "ryan0712/llama-3-8b-slow-DUS-random-layer1-method2", "ryan0712/llama-3-8b-slow-DUS-random-layer2-method2", "base_model:ryan0712/llama-3-8b-slow-DUS-random-layer1-method2", "base_model:merge:ryan0712/llama-3-8b-slow-DUS-random-layer1-method2", "base_model:ryan0712/llama-3-8b-slow-DUS-random-layer2-method2", "base_model:merge:ryan0712/llama-3-8b-slow-DUS-random-layer2-method2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation · createdAt: 2024-05-28T12:30:04Z
--- tags: - merge - mergekit - lazymergekit - ryan0712/llama-3-8b-slow-DUS-random-layer1-method2 - ryan0712/llama-3-8b-slow-DUS-random-layer2-method2 base_model: - ryan0712/llama-3-8b-slow-DUS-random-layer1-method2 - ryan0712/llama-3-8b-slow-DUS-random-layer2-method2 --- # llama-3-8b-slow-DUS-random-layer-method2 llama-3-8b-slow-DUS-random-layer-method2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [ryan0712/llama-3-8b-slow-DUS-random-layer1-method2](https://huggingface.co/ryan0712/llama-3-8b-slow-DUS-random-layer1-method2) * [ryan0712/llama-3-8b-slow-DUS-random-layer2-method2](https://huggingface.co/ryan0712/llama-3-8b-slow-DUS-random-layer2-method2) ## 🧩 Configuration ```yaml slices: - sources: - model: ryan0712/llama-3-8b-slow-DUS-random-layer1-method2 layer_range: [0, 16] - model: ryan0712/llama-3-8b-slow-DUS-random-layer2-method2 layer_range: [0, 16] merge_method: slerp base_model: ryan0712/llama-3-8b-slow-DUS-random-layer1-method2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "ryan0712/llama-3-8b-slow-DUS-random-layer-method2" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
**datek/Qwen-Qwen1.5-1.8B-1716899027** · author: datek · last_modified: 2024-05-28T12:25:34Z · downloads: 135 · likes: 0 · library_name: transformers · tags:
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation · createdAt: 2024-05-28T12:23:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
**GeorgeDaDude/jb_sytem_bin_judge_base_qa_wdo** · author: GeorgeDaDude · last_modified: 2024-05-28T12:24:30Z · downloads: 180 · likes: 0 · library_name: transformers · tags:
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification · createdAt: 2024-05-28T11:27:12Z
--- license: mit base_model: roberta-base tags: - generated_from_trainer metrics: - accuracy - recall - precision - f1 model-index: - name: jb_sytem_bin_judge_base_qa_wdo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # jb_sytem_bin_judge_base_qa_wdo This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6013 - Accuracy: 0.7910 - Recall: 0.9244 - Precision: 0.6854 - F1: 0.7871 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | Precision | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.7797 | 1.0 | 1708 | 0.5605 | 0.7498 | 0.4643 | 0.8805 | 0.6080 | | 0.662 | 2.0 | 3416 | 0.6802 | 0.5821 | 0.0 | 0.0 | 0.0 | | 0.7345 | 3.0 | 5124 | 0.6811 | 0.5821 | 0.0 | 0.0 | 0.0 | | 0.6475 | 4.0 | 6832 | 0.6817 | 0.5821 | 0.0 | 0.0 | 0.0 | | 0.2504 | 5.0 | 8540 | 0.6013 | 0.7910 | 0.9244 | 0.6854 | 0.7871 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
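No usage example is included; a minimal sketch with the 🤗 text-classification pipeline follows (the input sentence is illustrative, and the card does not document what the labels correspond to):

```python
from transformers import pipeline

# Binary judge classifier fine-tuned from roberta-base
clf = pipeline("text-classification", model="GeorgeDaDude/jb_sytem_bin_judge_base_qa_wdo")
print(clf("Example question-answer exchange to be judged."))
# e.g. [{'label': 'LABEL_1', 'score': 0.97}] -- label meanings depend on the training setup
```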
**waddledee/three-line-summarization-ja** · author: waddledee · last_modified: 2024-05-28T12:22:15Z · downloads: 7 · likes: 0 · library_name: peft · tags:
[ "peft", "ja", "dataset:waddledee/three_line_summarization_for_japanese_news_articles", "region:us" ]
pipeline_tag: null · createdAt: 2024-04-15T06:08:42Z
--- library_name: peft datasets: - waddledee/three_line_summarization_for_japanese_news_articles language: - ja --- "elyza/ELYZA-japanese-Llama-2-7b-instruct"をベースモデルとして、3行要約タスクでLoRAチューニングしたモデルです。 ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig from peft import PeftModel, PeftConfig import torch peft_model_id = "waddledee/three-line-summarization-ja" config = PeftConfig.from_pretrained(peft_model_id) model_name = config.base_model_name_or_path # this is the base model name tokenizer = AutoTokenizer.from_pretrained(peft_model_id) bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.float16, ) # Load the base model in 4bit model_4bit = AutoModelForCausalLM.from_pretrained( model_name, quantization_config=bnb_config, trust_remote_code=True ) model_4bit.config.use_cache = False # this will make new learnable parameters for specialized tokens model_4bit.resize_token_embeddings(len(tokenizer)) model_from_hub = PeftModel.from_pretrained( model_4bit, peft_model_id, torch_dtype=torch.float16, device_map={'':0} ) def gen(text, model): prompt = f"""<s>[INST] <<SYS>> あなたは誠実で優秀な日本人のアシスタントです。 <</SYS>> 以下の入力文を3行で要約しなさい。 入力文: {text} [/INST] [R_START] """ token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt", truncation=True, max_length=4096) token_ids.to("cuda") with torch.no_grad(): output_ids =model.generate( inputs = token_ids, do_sample=False, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, max_new_tokens=256 ) output = tokenizer.decode(output_ids.tolist()[0][token_ids.size(1) :], skip_special_tokens=True) return(output) text = "2013年に「一帯一路」は、中国が世界経済の中心的地位を占めていた次代の古代シルクロードの再現を意識したものとされ、陸上と海上の双方において中国と中央アジア、欧州までを結ぶ構想だ。中国メディアの中国網はこのほど、一帯一路構想の実現に向け、中国は日本とどのように対峙すべきかを論じる記事を掲載。中国社科院世界経済政治研究所の研究員の分析として、日本は一帯一路構想の実現における競合として中国の前に立ちはだかると主張した。日本が中国の競合となると主張した1つ目の理由は、「シルクロード文化に最も興味を示しているのは日本である」ことだという。日本にはシルクロードを題材にした小説やドキュメンタリーが多く、シルクロードに対する熱意は中国をも凌ぐゆえだ。確かに日本ではシルクロードを題材とした小説などは多いが、これは納得できない理由だ。記事が挙げた2つ目の理由は「冷戦後、もっとも早くシルクロードに商機を見出したのが日本」であることだという。日本は1997年に当時の橋本総理が「対シルクロード地域外交」を打ち出したが、これはどの国よりも早くシルクロードの重要性に注目した結果であると指摘した。3つ目の点は「中国に対して、もっとも競争力を有しているのが日本である」ことで、日本が中国主導のアジアインフラ投資銀行(AIIB)に対抗して、1100億ドルのインフラ投資をアジアで行う方針を打ち出したことを指摘。また、日本は一帯一路構想に対する「破壊力」も有しているうえ、日本は経済面、政治外交面、軍事面で「もっとも中国に対して懐疑的」であることから、中国の一帯一路構想について、日本が何らかの形で対抗策を打ち出してくるのではないかと警戒感を示した。" gen(text,model_from_hub) >> '3行要約:\n中国メディアが日本が一帯一路構想に対抗するのかを論じた\n日本がシルクロード文化に最も興味を示していると指摘\n日本は経済面、政治外交面、軍事面で中国に対して懐疑的 ' ``` ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
**mradermacher/Alpha-Ophiuchi-mini-128k-v0.1-GGUF** · author: mradermacher · last_modified: 2024-05-28T12:21:52Z · downloads: 15 · likes: 1 · library_name: transformers · tags:
[ "transformers", "gguf", "en", "dataset:NobodyExistsOnTheInternet/ToxicQAFinal", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
pipeline_tag: null · createdAt: 2024-05-28T06:50:06Z
--- base_model: fearlessdots/Alpha-Ophiuchi-mini-128k-v0.1 datasets: - NobodyExistsOnTheInternet/ToxicQAFinal language: - en library_name: transformers license: mit quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/fearlessdots/Alpha-Ophiuchi-mini-128k-v0.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Alpha-Ophiuchi-mini-128k-v0.1-GGUF/resolve/main/Alpha-Ophiuchi-mini-128k-v0.1.Q2_K.gguf) | Q2_K | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/Alpha-Ophiuchi-mini-128k-v0.1-GGUF/resolve/main/Alpha-Ophiuchi-mini-128k-v0.1.IQ3_XS.gguf) | IQ3_XS | 1.7 | | | [GGUF](https://huggingface.co/mradermacher/Alpha-Ophiuchi-mini-128k-v0.1-GGUF/resolve/main/Alpha-Ophiuchi-mini-128k-v0.1.IQ3_S.gguf) | IQ3_S | 1.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Alpha-Ophiuchi-mini-128k-v0.1-GGUF/resolve/main/Alpha-Ophiuchi-mini-128k-v0.1.Q3_K_S.gguf) | Q3_K_S | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/Alpha-Ophiuchi-mini-128k-v0.1-GGUF/resolve/main/Alpha-Ophiuchi-mini-128k-v0.1.IQ3_M.gguf) | IQ3_M | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/Alpha-Ophiuchi-mini-128k-v0.1-GGUF/resolve/main/Alpha-Ophiuchi-mini-128k-v0.1.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Alpha-Ophiuchi-mini-128k-v0.1-GGUF/resolve/main/Alpha-Ophiuchi-mini-128k-v0.1.IQ4_XS.gguf) | IQ4_XS | 2.2 | | | [GGUF](https://huggingface.co/mradermacher/Alpha-Ophiuchi-mini-128k-v0.1-GGUF/resolve/main/Alpha-Ophiuchi-mini-128k-v0.1.Q3_K_L.gguf) | Q3_K_L | 2.2 | | | [GGUF](https://huggingface.co/mradermacher/Alpha-Ophiuchi-mini-128k-v0.1-GGUF/resolve/main/Alpha-Ophiuchi-mini-128k-v0.1.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Alpha-Ophiuchi-mini-128k-v0.1-GGUF/resolve/main/Alpha-Ophiuchi-mini-128k-v0.1.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Alpha-Ophiuchi-mini-128k-v0.1-GGUF/resolve/main/Alpha-Ophiuchi-mini-128k-v0.1.Q5_K_S.gguf) | Q5_K_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Alpha-Ophiuchi-mini-128k-v0.1-GGUF/resolve/main/Alpha-Ophiuchi-mini-128k-v0.1.Q5_K_M.gguf) | Q5_K_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Alpha-Ophiuchi-mini-128k-v0.1-GGUF/resolve/main/Alpha-Ophiuchi-mini-128k-v0.1.Q6_K.gguf) | Q6_K | 3.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Alpha-Ophiuchi-mini-128k-v0.1-GGUF/resolve/main/Alpha-Ophiuchi-mini-128k-v0.1.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Alpha-Ophiuchi-mini-128k-v0.1-GGUF/resolve/main/Alpha-Ophiuchi-mini-128k-v0.1.f16.gguf) | f16 | 7.7 | 16 bpw, overkill | Here is a handy graph by ikawrakow 
comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
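The Usage section defers to TheBloke's READMEs for llama.cpp specifics; as an alternative sketch, a single quant can be fetched and loaded from Python (this assumes the `huggingface_hub` and `llama-cpp-python` packages; the file name is taken from the Q4_K_M row of the table above, and the context size is an assumption):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file from the repo (Q4_K_M is marked "fast, recommended" above)
gguf_path = hf_hub_download(
    repo_id="mradermacher/Alpha-Ophiuchi-mini-128k-v0.1-GGUF",
    filename="Alpha-Ophiuchi-mini-128k-v0.1.Q4_K_M.gguf",
)

# Load with llama-cpp-python and run a short completion
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write one sentence about autumn.", max_tokens=64)
print(out["choices"][0]["text"])
```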
**John6666/comradeship-xl-v9-sdxl** · author: John6666 · last_modified: 2024-05-28T12:20:52Z · downloads: 58 · likes: 2 · library_name: diffusers · tags:
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
pipeline_tag: text-to-image · createdAt: 2024-05-28T12:13:25Z
--- license: other tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime --- Original model is [here](https://civitai.com/models/246299?modelVersionId=502629).
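The card only links back to the Civitai original; a minimal sketch of loading the checkpoint with 🧨 Diffusers follows (fp16, the prompt, and the step count are assumptions):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/comradeship-xl-v9-sdxl", torch_dtype=torch.float16
).to("cuda")

# Prompt and settings are illustrative; the card gives no recommended parameters
image = pipe("1girl, anime style, cherry blossoms", num_inference_steps=28).images[0]
image.save("comradeship_sample.png")
```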
**sddcresearch/phi-3-vi-sft-1** · author: sddcresearch · last_modified: 2024-05-28T12:09:51Z · downloads: 0 · likes: 0 · library_name: peft · tags:
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:adapter:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
pipeline_tag: null · createdAt: 2024-05-28T12:05:17Z
--- license: mit library_name: peft tags: - trl - sft - generated_from_trainer base_model: microsoft/Phi-3-mini-4k-instruct model-index: - name: phi-3-vi-sft-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/sddc_research/huggingface/runs/f09h3br3) # phi-3-vi-sft-1 This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 4 - seed: 3407 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.11.2.dev0 - Transformers 4.42.0.dev0 - Pytorch 2.3.0+cu121 - Datasets 2.16.0 - Tokenizers 0.19.1
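Because this repository is a PEFT adapter rather than a full model, a sketch of attaching it to the declared base model might look like this (dtype, device mapping, and the Vietnamese prompt are assumptions):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/Phi-3-mini-4k-instruct"   # base_model declared in the card metadata
adapter_id = "sddcresearch/phi-3-vi-sft-1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Xin chào, bạn khỏe không?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```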
**Netta1994/setfit_baai_2k** · author: Netta1994 · last_modified: 2024-05-28T12:09:14Z · downloads: 7 · likes: 0 · library_name: sentence-transformers · tags:
[ "sentence-transformers", "safetensors", "bert", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
pipeline_tag: text-classification · createdAt: 2024-05-28T12:08:44Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # Netta1994/setfit_baai_2k This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("Netta1994/setfit_baai_2k") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
**girayo/a2c-PandaReachDense-v3** · author: girayo · last_modified: 2024-05-28T12:06:25Z · downloads: 0 · likes: 0 · library_name: stable-baselines3 · tags:
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
pipeline_tag: reinforcement-learning · createdAt: 2024-05-28T12:00:45Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.28 +/- 0.12 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of a **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
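The usage block in the card is left as a TODO; the usual huggingface_sb3 loading pattern is sketched below (the checkpoint file name inside the repo and the evaluation snippet are assumptions):

```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- importing registers PandaReachDense-v3
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# File name follows the common <algo>-<env>.zip convention; adjust if the repo differs
checkpoint = load_from_hub(
    repo_id="girayo/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```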
**AkhilTolani/vocals-english** · author: AkhilTolani · last_modified: 2024-05-28T12:03:55Z · downloads: 49 · likes: 0 · library_name: transformers · tags:
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text2text-generation · createdAt: 2024-05-28T12:02:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
**xxx777xxxASD/L3_SnowStorm_4x8B** · author: xxx777xxxASD · last_modified: 2024-05-28T12:02:45Z · downloads: 17 · likes: 11 · library_name: transformers · tags:
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "conversational", "en", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation · createdAt: 2024-05-20T10:31:44Z
--- license: llama3 tags: - moe language: - en --- <style> .image-container { position: relative; display: inline-block; } .image-container img { display: block; border-radius: 10px; box-shadow: 0 0 1px rgba(0, 0, 0, 0.3); } .image-container::before { content: ""; position: absolute; top: 0px; left: 20px; width: calc(100% - 40px); height: calc(100%); background-image: url("https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/OuMe79ZQPdCX01rTdfgXn.png"); background-size: cover; filter: blur(10px); z-index: -1; } </style> <br> <div class="image-container"> <img src="https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/OuMe79ZQPdCX01rTdfgXn.png" style="width: 96%; margin: auto;" > </div> (Maybe i'll change the waifu picture later) > [!NOTE] > [GGUF/Exl2 quants](https://huggingface.co/collections/xxx777xxxASD/snowstorm-4x8b-664b52a1d2a12e515efb5680) > [!NOTE] > Check for [v1.15A](https://huggingface.co/xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-A) and [v1.15B](https://huggingface.co/xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-B) Experimental RP-oriented MoE, the idea was to get a model that would be equal to or better than Mixtral 8x7B and it's finetunes in RP/ERP tasks. ### Llama 3 SnowStorm v1.0 4x8B ``` base_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1-OAS gate_mode: random dtype: bfloat16 experts_per_token: 2 experts: - source_model: ChaoticNeutrals_Poppy_Porpoise-v0.7-L3-8B - source_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1-OAS - source_model: openlynn_Llama-3-Soliloquy-8B-v2 - source_model: Sao10K_L3-8B-Stheno-v3.1 ``` ## Models used - [ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B) - [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS) - [openlynn/Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2) - [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1) ## Difference(from ChaoticSoliloquy v1.5) - Update from [NeverSleep/Llama-3-Lumimaid-8B-v0.1](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1) to [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS) - Update from [openlynn/Llama-3-Soliloquy-8B-v1](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v1) to [openlynn/Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2) - Update from [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1) to [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1) ## Vision [llama3_mmproj](https://huggingface.co/ChaoticNeutrals/LLaVA-Llama-3-8B-mmproj-Updated) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/yv4C6NalqORLjvY3KKZk8.png) ## Prompt format: Llama 3
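The card notes the Llama 3 prompt format but includes no loading snippet; a minimal sketch with 🤗 Transformers follows (bfloat16, device mapping, and the example message are assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xxx777xxxASD/L3_SnowStorm_4x8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# The tokenizer's chat template should apply the Llama 3 format mentioned above
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```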
**juman48/distilbert_beekeeping_QandA_model** · author: juman48 · last_modified: 2024-05-28T12:01:59Z · downloads: 22 · likes: 0 · library_name: transformers · tags:
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: question-answering · createdAt: 2024-05-12T14:33:31Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert_beekeeping_QandA_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_beekeeping_QandA_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1106 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 54 | 0.4174 | | No log | 2.0 | 108 | 0.1275 | | No log | 3.0 | 162 | 0.1516 | | No log | 4.0 | 216 | 0.1016 | | No log | 5.0 | 270 | 0.1128 | | No log | 6.0 | 324 | 0.1058 | | No log | 7.0 | 378 | 0.0903 | | No log | 8.0 | 432 | 0.1027 | | No log | 9.0 | 486 | 0.1285 | | 0.369 | 10.0 | 540 | 0.1211 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
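No usage example is given; a minimal sketch with the 🤗 question-answering pipeline follows (the question/context pair is illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="juman48/distilbert_beekeeping_QandA_model")
result = qa(
    question="When should a beekeeper add a honey super?",
    context="A honey super is usually added once the brood box is about eighty percent "
            "full of bees and stores, so the colony has room to store surplus nectar.",
)
print(result["answer"], result["score"])
```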
**philz1337x/upscaler** · author: philz1337x · last_modified: 2024-05-28T11:59:08Z · downloads: 0 · likes: 4 · library_name: null · tags:
[ "region:us" ]
pipeline_tag: null · createdAt: 2023-12-13T08:11:09Z
The upscaler I am using for Clarity Upscaler. App: https://ClarityAI.co · API: https://replicate.com/philz1337x/clarity-upscaler/ · GitHub: http://github.com/philz1337x/clarity-upscaler
**Ransss/llama-3-Daredevil-Mahou-8B-Q8_0-GGUF** · author: Ransss · last_modified: 2024-05-28T11:58:51Z · downloads: 2 · likes: 0 · library_name: transformers · tags:
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:flammenai/Mahou-1.1-llama3-8B", "base_model:merge:flammenai/Mahou-1.1-llama3-8B", "base_model:flammenai/Mahou-1.2a-llama3-8B", "base_model:merge:flammenai/Mahou-1.2a-llama3-8B", "base_model:mlabonne/Daredevil-8B-abliterated", "base_model:merge:mlabonne/Daredevil-8B-abliterated", "license:llama3", "endpoints_compatible", "region:us" ]
pipeline_tag: null · createdAt: 2024-05-28T11:58:28Z
--- license: llama3 library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo base_model: - mlabonne/Daredevil-8B-abliterated - flammenai/Mahou-1.2a-llama3-8B - flammenai/Mahou-1.1-llama3-8B --- # Ransss/llama-3-Daredevil-Mahou-8B-Q8_0-GGUF This model was converted to GGUF format from [`nbeerbower/llama-3-Daredevil-Mahou-8B`](https://huggingface.co/nbeerbower/llama-3-Daredevil-Mahou-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/nbeerbower/llama-3-Daredevil-Mahou-8B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo Ransss/llama-3-Daredevil-Mahou-8B-Q8_0-GGUF --model llama-3-daredevil-mahou-8b-q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo Ransss/llama-3-Daredevil-Mahou-8B-Q8_0-GGUF --model llama-3-daredevil-mahou-8b-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && \ cd llama.cpp && \ make && \ ./main -m llama-3-daredevil-mahou-8b-q8_0.gguf -n 128 ```
Likich/gemmainstruct-finetune-qualcoding_1000_prompt1
Likich
2024-05-28T11:57:36Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-28T11:57:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lgk03/WITHINAPPS_NDD-mrbs_test-content_tags
lgk03
2024-05-28T11:56:52Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T11:47:53Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: WITHINAPPS_NDD-mrbs_test-content_tags results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # WITHINAPPS_NDD-mrbs_test-content_tags This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0294 - Accuracy: 0.9943 - F1: 0.9943 - Precision: 0.9943 - Recall: 0.9943 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 1.0 | 71 | 0.0370 | 0.9943 | 0.9943 | 0.9943 | 0.9943 | | No log | 2.0 | 142 | 0.0294 | 0.9943 | 0.9943 | 0.9943 | 0.9943 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
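A minimal inference sketch (not part of the generated card above): the checkpoint is a DistilBERT sequence classifier, so the standard text-classification pipeline applies. The input string is an invented placeholder, and the label set comes from the fine-tuning config, which this card does not document.

```python
from transformers import pipeline

# Sketch only: label names are defined by the fine-tuning config, not by the card.
clf = pipeline(
    "text-classification",
    model="lgk03/WITHINAPPS_NDD-mrbs_test-content_tags",
)
print(clf("<serialized page content / tag sequence to classify>"))
```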
joshnader/Phi-3-mini-4k-instruct-Q4_K_M-GGUF
joshnader
2024-05-28T11:56:49Z
0
0
null
[ "gguf", "nlp", "code", "llama-cpp", "gguf-my-repo", "text-generation", "en", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-28T11:56:42Z
--- language: - en license: mit tags: - nlp - code - llama-cpp - gguf-my-repo license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE pipeline_tag: text-generation inference: parameters: temperature: 0.0 widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- # joshnader/Phi-3-mini-4k-instruct-Q4_K_M-GGUF This model was converted to GGUF format from [`microsoft/Phi-3-mini-4k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo joshnader/Phi-3-mini-4k-instruct-Q4_K_M-GGUF --model phi-3-mini-4k-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo joshnader/Phi-3-mini-4k-instruct-Q4_K_M-GGUF --model phi-3-mini-4k-instruct-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && \ cd llama.cpp && \ make && \ ./main -m phi-3-mini-4k-instruct-q4_k_m.gguf -n 128 ```
dan713z/lunder_lander
dan713z
2024-05-28T11:56:47Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-28T11:56:30Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: ' PPO' results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 254.71 +/- 16.26 name: mean_reward verified: false --- # ** PPO** Agent playing **LunarLander-v2** This is a trained model of a ** PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
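The card above leaves the usage block as a TODO; a hedged sketch of the usual stable-baselines3 + huggingface_sb3 loading pattern follows. The checkpoint filename is a guess and should be checked against the repository's file list.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; verify the actual .zip name in the repo files.
checkpoint = load_from_hub(repo_id="dan713z/lunder_lander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Quick rollout step to check the policy loads and acts.
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```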
fabisor/my_awesome_model
fabisor
2024-05-28T11:55:06Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-21T16:09:42Z
--- license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: my_awesome_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2327 - Accuracy: 0.931 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.224 | 1.0 | 1563 | 0.2010 | 0.9222 | | 0.1464 | 2.0 | 3126 | 0.2327 | 0.931 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
CentralogicAITeam/TX_OneFourFamilyPage8_DemoModel_v01
CentralogicAITeam
2024-05-28T11:51:32Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-28T11:37:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
John6666/tri-fusion-v1e-sdxl
John6666
2024-05-28T11:44:42Z
43
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-05-28T11:38:20Z
--- license: other tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime --- Original model is [here](https://civitai.com/models/475023?modelVersionId=532202).
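A minimal diffusers sketch (not in the original card), assuming the repo loads as a standard SDXL text-to-image pipeline as its tags suggest; the prompt is an arbitrary example.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Sketch only: assumes a CUDA device and fp16 weights are available.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/tri-fusion-v1e-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe("1girl, anime style, masterpiece, best quality", num_inference_steps=28).images[0]
image.save("sample.png")
```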
fhnw/Llama-3-8B-pineapple-pizza-orpo-gguf
fhnw
2024-05-28T11:41:40Z
3
0
null
[ "gguf", "GGUF", "base_model:fhnw/Llama-3-8B-pineapple-pizza-orpo", "base_model:quantized:fhnw/Llama-3-8B-pineapple-pizza-orpo", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-28T11:37:11Z
--- tags: - GGUF base_model: - fhnw/Llama-3-8B-pineapple-pizza-orpo --- # Llama-3-8B-pineapple-pizza-orpo-gguf Llama-3-8B-pineapple-pizza-orpo-gguf is GGUF made from the following model: * [fhnw/Llama-3-8B-pineapple-pizza-orpo](https://huggingface.co/fhnw/Llama-3-8B-pineapple-pizza-orpo)
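Besides llama.cpp itself, GGUF files like these can be used from Python; the sketch below relies on `llama-cpp-python`'s `Llama.from_pretrained` helper (which needs `huggingface_hub` installed), and the quantization filename pattern is an assumption to be checked against the repo's file list.

```python
from llama_cpp import Llama

# Sketch only: the filename glob is a guess; pick a .gguf file that actually
# exists in the repository.
llm = Llama.from_pretrained(
    repo_id="fhnw/Llama-3-8B-pineapple-pizza-orpo-gguf",
    filename="*Q4_K_M.gguf",
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Does pineapple belong on pizza?"}]
)
print(out["choices"][0]["message"]["content"])
```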
saumyax/multinews_model
saumyax
2024-05-28T11:40:29Z
103
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:multi_news", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-10-01T12:07:20Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer datasets: - multi_news metrics: - rouge model-index: - name: multinews_model results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: multi_news type: multi_news config: default split: test args: default metrics: - name: Rouge1 type: rouge value: 0.1482 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multinews_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the multi_news dataset. It achieves the following results on the evaluation set: - Loss: 2.7165 - Rouge1: 0.1482 - Rouge2: 0.0472 - Rougel: 0.1132 - Rougelsum: 0.1132 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 450 | 2.8616 | 0.1388 | 0.0418 | 0.1057 | 0.1056 | 19.0 | | 3.2544 | 2.0 | 900 | 2.7991 | 0.1427 | 0.0438 | 0.1089 | 0.1089 | 19.0 | | 2.999 | 3.0 | 1350 | 2.7693 | 0.1449 | 0.046 | 0.1115 | 0.1114 | 19.0 | | 2.958 | 4.0 | 1800 | 2.7531 | 0.1466 | 0.0462 | 0.112 | 0.1118 | 19.0 | | 2.9198 | 5.0 | 2250 | 2.7431 | 0.1466 | 0.0465 | 0.112 | 0.1119 | 19.0 | | 2.8838 | 6.0 | 2700 | 2.7328 | 0.1474 | 0.0461 | 0.1125 | 0.1123 | 19.0 | | 2.8774 | 7.0 | 3150 | 2.7270 | 0.1477 | 0.0463 | 0.1126 | 0.1124 | 19.0 | | 2.8712 | 8.0 | 3600 | 2.7226 | 0.148 | 0.0466 | 0.1128 | 0.1127 | 19.0 | | 2.854 | 9.0 | 4050 | 2.7197 | 0.1479 | 0.047 | 0.1129 | 0.1128 | 19.0 | | 2.8541 | 10.0 | 4500 | 2.7188 | 0.1485 | 0.0471 | 0.113 | 0.1129 | 19.0 | | 2.8541 | 11.0 | 4950 | 2.7168 | 0.1483 | 0.0472 | 0.1131 | 0.1131 | 19.0 | | 2.8466 | 12.0 | 5400 | 2.7165 | 0.1482 | 0.0472 | 0.1132 | 0.1132 | 19.0 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
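A short inference sketch (not part of the generated card above): the checkpoint is a T5-small fine-tune, so the summarization pipeline applies; the `summarize:` prefix follows the usual T5 convention and the article text is a placeholder.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="saumyax/multinews_model")

# Placeholder input; real multi_news documents are several concatenated news articles.
article = "summarize: " + "Officials announced on Monday that the new transit line will open next spring, ..."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```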
triksblade/poca-SoccerTwos
triksblade
2024-05-28T11:39:55Z
11
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2024-05-21T12:50:46Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** Created by Ilham Yahya to fulfill an assignment for the Multi-Agent Systems course. This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: triksblade/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
lgk03/WITHINAPPS_NDD-ppma_test-content_tags
lgk03
2024-05-28T11:37:24Z
122
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T11:33:02Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: WITHINAPPS_NDD-ppma_test-content_tags results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # WITHINAPPS_NDD-ppma_test-content_tags This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2207 - Accuracy: 0.8795 - F1: 0.8231 - Precision: 0.7735 - Recall: 0.8795 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 0.9836 | 30 | 0.2348 | 0.8795 | 0.8231 | 0.7735 | 0.8795 | | No log | 1.9672 | 60 | 0.2207 | 0.8795 | 0.8231 | 0.7735 | 0.8795 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
smerchi/generated_whisper_test1
smerchi
2024-05-28T11:37:13Z
89
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-28T10:55:43Z
--- license: apache-2.0 base_model: openai/whisper-large-v3 tags: - generated_from_trainer model-index: - name: generated_whisper_test1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # generated_whisper_test1 This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.0.1+cu117 - Datasets 2.16.1 - Tokenizers 0.15.1
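A minimal transcription sketch (not in the generated card above), assuming the checkpoint behaves like its Whisper-large-v3 base; the audio path is a placeholder.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="smerchi/generated_whisper_test1",
    chunk_length_s=30,
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```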
joshnader/Phi-3-medium-128k-instruct-Q4_K_M-GGUF
joshnader
2024-05-28T11:31:54Z
0
0
null
[ "gguf", "nlp", "code", "llama-cpp", "gguf-my-repo", "text-generation", "multilingual", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-28T11:31:30Z
--- language: - multilingual license: mit tags: - nlp - code - llama-cpp - gguf-my-repo license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE pipeline_tag: text-generation inference: parameters: temperature: 0.7 widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- # joshnader/Phi-3-medium-128k-instruct-Q4_K_M-GGUF This model was converted to GGUF format from [`microsoft/Phi-3-medium-128k-instruct`](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo joshnader/Phi-3-medium-128k-instruct-Q4_K_M-GGUF --model phi-3-medium-128k-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo joshnader/Phi-3-medium-128k-instruct-Q4_K_M-GGUF --model phi-3-medium-128k-instruct-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && \ cd llama.cpp && \ make && \ ./main -m phi-3-medium-128k-instruct-q4_k_m.gguf -n 128 ```
bartowski/internlm2-math-plus-mixtral8x22b-GGUF
bartowski
2024-05-28T11:29:29Z
102
1
null
[ "gguf", "math", "text-generation", "en", "zh", "license:other", "endpoints_compatible", "region:us", "imatrix" ]
text-generation
2024-05-28T06:04:27Z
--- pipeline_tag: text-generation license: other language: - en - zh tags: - math quantized_by: bartowski --- ## Llamacpp imatrix Quantizations of internlm2-math-plus-mixtral8x22b Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3001">b3001</a> for quantization. Original model: https://huggingface.co/internlm/internlm2-math-plus-mixtral8x22b All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) ## Prompt format No chat template specified so default is used. This may be incorrect, check original model card for details. ``` <s> [INST] <<SYS>> {system_prompt} <</SYS>> {prompt} [/INST] </s> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [internlm2-math-plus-mixtral8x22b-Q6_K.gguf](https://huggingface.co/bartowski/internlm2-math-plus-mixtral8x22b-GGUF/tree/main/internlm2-math-plus-mixtral8x22b-Q6_K.gguf) | Q6_K | 115.53GB | Very high quality, near perfect, *recommended*. | | [internlm2-math-plus-mixtral8x22b-Q5_K_M.gguf](https://huggingface.co/bartowski/internlm2-math-plus-mixtral8x22b-GGUF/tree/main/internlm2-math-plus-mixtral8x22b-Q5_K_M.gguf) | Q5_K_M | 99.97GB | High quality, *recommended*. | | [internlm2-math-plus-mixtral8x22b-Q4_K_M.gguf](https://huggingface.co/bartowski/internlm2-math-plus-mixtral8x22b-GGUF/tree/main/internlm2-math-plus-mixtral8x22b-Q4_K_M.gguf) | Q4_K_M | 85.59GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [internlm2-math-plus-mixtral8x22b-Q4_K_S.gguf](https://huggingface.co/bartowski/internlm2-math-plus-mixtral8x22b-GGUF/tree/main/internlm2-math-plus-mixtral8x22b-Q4_K_S.gguf) | Q4_K_S | 80.48GB | Slightly lower quality with more space savings, *recommended*. | | [internlm2-math-plus-mixtral8x22b-IQ4_XS.gguf](https://huggingface.co/bartowski/internlm2-math-plus-mixtral8x22b-GGUF/tree/main/internlm2-math-plus-mixtral8x22b-IQ4_XS.gguf) | IQ4_XS | 75.47GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [internlm2-math-plus-mixtral8x22b-Q3_K_M.gguf](https://huggingface.co/bartowski/internlm2-math-plus-mixtral8x22b-GGUF/tree/main/internlm2-math-plus-mixtral8x22b-Q3_K_M.gguf) | Q3_K_M | 67.79GB | Even lower quality. | | [internlm2-math-plus-mixtral8x22b-IQ3_M.gguf](https://huggingface.co/bartowski/internlm2-math-plus-mixtral8x22b-GGUF/tree/main/internlm2-math-plus-mixtral8x22b-IQ3_M.gguf) | IQ3_M | 64.49GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [internlm2-math-plus-mixtral8x22b-IQ3_XS.gguf](https://huggingface.co/bartowski/internlm2-math-plus-mixtral8x22b-GGUF/tree/main/internlm2-math-plus-mixtral8x22b-IQ3_XS.gguf) | IQ3_XS | 58.23GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [internlm2-math-plus-mixtral8x22b-Q2_K.gguf](https://huggingface.co/bartowski/internlm2-math-plus-mixtral8x22b-GGUF/tree/main/internlm2-math-plus-mixtral8x22b-Q2_K.gguf) | Q2_K | 52.10GB | Very low quality but surprisingly usable. | | [internlm2-math-plus-mixtral8x22b-IQ2_M.gguf](https://huggingface.co/bartowski/internlm2-math-plus-mixtral8x22b-GGUF/blob/main/internlm2-math-plus-mixtral8x22b-IQ2_M.gguf) | IQ2_M | 46.71GB | Very low quality, uses SOTA techniques to also be surprisingly usable. 
| | [internlm2-math-plus-mixtral8x22b-IQ2_S.gguf](https://huggingface.co/bartowski/internlm2-math-plus-mixtral8x22b-GGUF/blob/main/internlm2-math-plus-mixtral8x22b-IQ2_S.gguf) | IQ2_S | 42.59GB | Very low quality, uses SOTA techniques to be usable. | | [internlm2-math-plus-mixtral8x22b-IQ2_XXS.gguf](https://huggingface.co/bartowski/internlm2-math-plus-mixtral8x22b-GGUF/blob/main/internlm2-math-plus-mixtral8x22b-IQ2_XXS.gguf) | IQ2_XXS | 37.88GB | Lower quality, uses SOTA techniques to be usable. | | [internlm2-math-plus-mixtral8x22b-IQ1_M.gguf](https://huggingface.co/bartowski/internlm2-math-plus-mixtral8x22b-GGUF/blob/main/internlm2-math-plus-mixtral8x22b-IQ1_M.gguf) | IQ1_M | 32.73GB | Extremely low quality, *not* recommended. | ## Downloading using huggingface-cli First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/internlm2-math-plus-mixtral8x22b-GGUF --include "internlm2-math-plus-mixtral8x22b-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/internlm2-math-plus-mixtral8x22b-GGUF --include "internlm2-math-plus-mixtral8x22b-Q8_0.gguf/*" --local-dir internlm2-math-plus-mixtral8x22b-Q8_0 ``` You can either specify a new local-dir (internlm2-math-plus-mixtral8x22b-Q8_0) or download them all in place (./). ## Which file should I choose? A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double-check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
cjsanjay/llama-3-8B-gorilla-opencve_v1
cjsanjay
2024-05-28T11:27:02Z
21
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-28T11:23:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
flpelerin/TinyLlama-1.1b-slimorca-10k
flpelerin
2024-05-28T11:26:22Z
134
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-28T11:20:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Unbabel/wmt21-comet-qe-mqm-marian
Unbabel
2024-05-28T11:26:15Z
0
1
null
[ "translation", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "license:apache-2.0", "region:us" ]
translation
2024-05-28T11:24:45Z
--- pipeline_tag: translation language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: apache-2.0 --- Marian version of [wmt21-comet-qe-mqm-marian](https://huggingface.co/Unbabel/wmt21-comet-qe-mqm-marian). Credits to Microsoft Translate Team! # Paper TBA # License Apache-2.0 # Usage TBA # Intended uses Our model is intended to be used for **MT evaluation**. Given a triplet with (source sentence, translation, reference translation), it outputs a single score between 0 and 1 where 1 represents a perfect translation. # Languages Covered: This model builds on top of XLM-R, which covers the following languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Scottish Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western Frisian, Xhosa, Yiddish. Thus, results for language pairs containing uncovered languages are unreliable!
ISTNetworks/Arabic_chat_7B
ISTNetworks
2024-05-28T11:24:50Z
4
0
adapter-transformers
[ "adapter-transformers", "gguf", "modified version ", "ar", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-12T23:54:07Z
--- license: apache-2.0 language: - ar - en library_name: adapter-transformers tags: - 'modified version ' ---
adriansanz/te-zsc-synthetic_5ep_2805
adriansanz
2024-05-28T11:23:40Z
109
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:projecte-aina/roberta-base-ca-v2-cased-te", "base_model:finetune:projecte-aina/roberta-base-ca-v2-cased-te", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T10:22:22Z
--- license: apache-2.0 base_model: projecte-aina/roberta-base-ca-v2-cased-te tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: 080524_epoch_5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 080524_epoch_5 This model is a fine-tuned version of [projecte-aina/roberta-base-ca-v2-cased-te](https://huggingface.co/projecte-aina/roberta-base-ca-v2-cased-te) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3399 - Accuracy: 0.981 - Precision: 0.9810 - Recall: 0.981 - F1: 0.9810 - Ratio: 0.495 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 2 - seed: 47 - gradient_accumulation_steps: 2 - total_train_batch_size: 20 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.06 - lr_scheduler_warmup_steps: 4 - num_epochs: 1 - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Ratio | |:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-----:| | 0.3013 | 0.0333 | 10 | 0.3474 | 0.978 | 0.9783 | 0.978 | 0.9780 | 0.488 | | 0.3087 | 0.0667 | 20 | 0.3471 | 0.979 | 0.9790 | 0.979 | 0.9790 | 0.495 | | 0.3181 | 0.1 | 30 | 0.3527 | 0.975 | 0.9752 | 0.975 | 0.9750 | 0.489 | | 0.3134 | 0.1333 | 40 | 0.3602 | 0.971 | 0.9714 | 0.971 | 0.9710 | 0.485 | | 0.3002 | 0.1667 | 50 | 0.3481 | 0.979 | 0.9790 | 0.979 | 0.9790 | 0.501 | | 0.3226 | 0.2 | 60 | 0.3547 | 0.978 | 0.9780 | 0.978 | 0.9780 | 0.496 | | 0.2919 | 0.2333 | 70 | 0.3687 | 0.972 | 0.9724 | 0.972 | 0.9720 | 0.486 | | 0.2932 | 0.2667 | 80 | 0.3822 | 0.965 | 0.9664 | 0.965 | 0.9650 | 0.473 | | 0.3303 | 0.3 | 90 | 0.3754 | 0.969 | 0.9700 | 0.969 | 0.9690 | 0.477 | | 0.3162 | 0.3333 | 100 | 0.3557 | 0.975 | 0.9750 | 0.975 | 0.9750 | 0.505 | | 0.3012 | 0.3667 | 110 | 0.3554 | 0.974 | 0.9741 | 0.974 | 0.9740 | 0.506 | | 0.3337 | 0.4 | 120 | 0.3629 | 0.972 | 0.9725 | 0.972 | 0.9720 | 0.484 | | 0.3007 | 0.4333 | 130 | 0.3492 | 0.979 | 0.9792 | 0.979 | 0.9790 | 0.491 | | 0.3283 | 0.4667 | 140 | 0.3467 | 0.979 | 0.9790 | 0.979 | 0.9790 | 0.495 | | 0.3238 | 0.5 | 150 | 0.3410 | 0.981 | 0.9810 | 0.981 | 0.9810 | 0.497 | | 0.3076 | 0.5333 | 160 | 0.3387 | 0.982 | 0.9820 | 0.982 | 0.9820 | 0.498 | | 0.3348 | 0.5667 | 170 | 0.3375 | 0.982 | 0.9820 | 0.982 | 0.9820 | 0.498 | | 0.3258 | 0.6 | 180 | 0.3401 | 0.98 | 0.9801 | 0.98 | 0.9800 | 0.494 | | 0.3195 | 0.6333 | 190 | 0.3424 | 0.978 | 0.9781 | 0.978 | 0.9780 | 0.492 | | 0.31 | 0.6667 | 200 | 0.3392 | 0.981 | 0.9810 | 0.981 | 0.9810 | 0.495 | | 0.3407 | 0.7 | 210 | 0.3393 | 0.982 | 0.9820 | 0.982 | 0.9820 | 0.502 | | 0.3494 | 0.7333 | 220 | 0.3413 | 0.981 | 0.9810 | 0.981 | 0.9810 | 0.501 | | 0.3574 | 0.7667 | 230 | 0.3402 | 0.982 | 0.9820 | 0.982 | 0.9820 | 0.496 | | 0.3379 | 0.8 | 240 | 0.3385 | 0.982 | 0.9820 | 0.982 | 0.9820 | 0.496 | | 0.3532 | 0.8333 | 250 | 0.3385 | 0.982 | 0.9820 | 0.982 | 0.9820 | 0.496 | | 0.318 | 0.8667 | 260 | 0.3425 | 0.98 | 0.9801 | 0.98 | 0.9800 | 0.494 | | 0.3475 | 0.9 | 270 | 0.3432 | 0.98 | 
0.9801 | 0.98 | 0.9800 | 0.494 | | 0.3142 | 0.9333 | 280 | 0.3408 | 0.981 | 0.9810 | 0.981 | 0.9810 | 0.495 | | 0.3421 | 0.9667 | 290 | 0.3404 | 0.981 | 0.9810 | 0.981 | 0.9810 | 0.495 | | 0.2935 | 1.0 | 300 | 0.3399 | 0.981 | 0.9810 | 0.981 | 0.9810 | 0.495 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
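A hedged usage sketch (not in the generated card above): the base model is a Catalan textual-entailment checkpoint, which is typically wrapped in the zero-shot-classification pipeline; the example sentence and candidate labels are invented.

```python
from transformers import pipeline

# Sketch only: assumes the entailment label mapping expected by the
# zero-shot-classification pipeline is present in the model config.
zsc = pipeline(
    "zero-shot-classification",
    model="adriansanz/te-zsc-synthetic_5ep_2805",
)
print(zsc(
    "L'ajuntament obre el termini per sol·licitar l'ajut al lloguer.",
    candidate_labels=["habitatge", "esports", "medi ambient"],
))
```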
Likich/gemmainstruct-finetune-qualcoding_1000_prompt1_dot
Likich
2024-05-28T11:22:30Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-28T11:22:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Parth211/ppo-LunarLander-v2
Parth211
2024-05-28T11:19:04Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-28T11:18:44Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 255.33 +/- 19.77 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename inside the repository is an assumption and may differ): ```python from stable_baselines3 import PPO from huggingface_sb3 import load_from_hub # the checkpoint filename below is an assumption; check the repository files checkpoint = load_from_hub(repo_id="Parth211/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
sararelayter/kruidvat-model
sararelayter
2024-05-28T11:18:47Z
29
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-05-28T11:18:02Z
--- license: creativeml-openrail-m tags: - text-to-image --- ### Kruidvat model on Stable Diffusion via Dreambooth #### model by sararelayter This your the Stable Diffusion model fine-tuned the Kruidvat model concept taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **Actie!** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Here are the images used for training this concept: ![image 0](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1183184 - PLAYING KIDS - Kruidvat NL APK ACTIE - 174-page-00001.jpg) ![image 1](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1183240 - PLAYING KIDS - Kruidvat NL APK ACTIE - 47-page-00001.jpg) ![image 2](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1187170 - A-MERK - Kruidvat NL APK ACTIE - 37-page-00001.jpg) ![image 3](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1184295 - TOMMY HILFIGER - Kruidvat NL APK ACTIE - 162-page-00001.jpg) ![image 4](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1189486 - SPEELGOED - Kruidvat NL APK ACTIE - 87-page-00001.jpg) ![image 5](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1189517 - WATSHOME - Kruidvat NL APK ACTIE - 14-page-00001.jpg) ![image 6](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1191191 - EVORA TEXTIEL - Kruidvat NL APK ACTIE - 140-page-00001.jpg) ![image 7](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1191486 - EVORA TEXTIEL - Kruidvat NL APK ACTIE - 151-page-00001.jpg) ![image 8](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1188987 - WATSHOME - Kruidvat NL APK ACTIE - 123-page-00001.jpg) ![image 9](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1191845 - SPEELGOED - Kruidvat NL APK ACTIE - 130-page-00001.jpg) ![image 10](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1188126 - FASHION - Kruidvat NL APK ACTIE - 289-page-00001.jpg) ![image 11](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1185666 - KENDALL & KYLIE - Kruidvat NL APK ACTIE - 168-page-00001.jpg) ![image 12](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1186253 - EVORA - Kruidvat NL APK ACTIE - 11-page-00001.jpg) ![image 13](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1192577 - FASHION - Kruidvat NL APK ACTIE - 290-page-00001.jpg) ![image 14](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1192888 - DURACELL - Kruidvat NL APK ACTIE - 147-page-00001.jpg) ![image 15](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1193469 - LICENTIE - Kruidvat NL APK ACTIE - 12-page-00001.jpg) ![image 16](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1193440 - TRUESPIRIT - Kruidvat NL APK ACTIE - 
111-page-00001.jpg) ![image 17](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1193532 - TRUESPIRIT - Kruidvat NL APK ACTIE - 72-page-00001.jpg) ![image 18](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1193535 - EVORA TEXTIEL - Kruidvat NL APK ACTIE - 170-page-00001.jpg) ![image 19](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1195817 - MEGABLEU - Kruidvat NL APK ACTIE - 181-page-00001.jpg) ![image 20](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1196055 - JUMBO - Kruidvat NL APK ACTIE - 62-page-00001.jpg) ![image 21](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1195282 - CLEMENTONI - Kruidvat NL APK ACTIE - 49-page-00001.jpg) ![image 22](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1194053 - EVORA TUIN - Kruidvat NL APK ACTIE - 67-page-00001.jpg) ![image 23](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1193669 - EVORA - Kruidvat NL APK ACTIE - 110-page-00001.jpg) ![image 24](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1193442 - TRUESPIRIT - Kruidvat NL APK ACTIE - 64-page-00001.jpg) ![image 25](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1196707 - RAINBOW SURPRISE - Kruidvat NL APK ACTIE - 100-page-00001.jpg) ![image 26](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1196720 - RAINBOW SURPRISE - Kruidvat NL APK ACTIE - 48-page-00001.jpg) ![image 27](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1182041 - JANNEKE BRINKMAN - Kruidvat NL APK ACTIE - 65-page-00001.jpg) ![image 28](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1180957 - PAW PATROL - Kruidvat NL APK ACTIE - 106-page-00001.jpg) ![image 29](https://huggingface.co/sararelayter/kruidvat-model/resolve/main/concept_images/1174217 - STABILO - Kruidvat NL APK ACTIE - 56-page-00001.jpg)
dawidmt/distill_pi
dawidmt
2024-05-28T11:17:58Z
111
1
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T11:11:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vuongnhathien/convnext-base-wd1e-8-4e-5
vuongnhathien
2024-05-28T11:16:56Z
191
0
transformers
[ "transformers", "tensorboard", "safetensors", "convnextv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/convnextv2-base-22k-384", "base_model:finetune:facebook/convnextv2-base-22k-384", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T05:46:06Z
--- license: apache-2.0 base_model: facebook/convnextv2-base-22k-384 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: convnext-base-wd1e-8-4e-5 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.9468253968253968 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-base-wd1e-8-4e-5 This model is a fine-tuned version of [facebook/convnextv2-base-22k-384](https://huggingface.co/facebook/convnextv2-base-22k-384) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2294 - Accuracy: 0.9468 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.6271 | 1.0 | 1099 | 0.3436 | 0.9054 | | 0.4528 | 2.0 | 2198 | 0.2767 | 0.9256 | | 0.3762 | 3.0 | 3297 | 0.2519 | 0.9268 | | 0.2979 | 4.0 | 4396 | 0.2414 | 0.9372 | | 0.2901 | 5.0 | 5495 | 0.2389 | 0.9427 | | 0.2381 | 6.0 | 6594 | 0.2408 | 0.9419 | | 0.2084 | 7.0 | 7693 | 0.2312 | 0.9463 | | 0.1742 | 8.0 | 8792 | 0.2359 | 0.9451 | | 0.1582 | 9.0 | 9891 | 0.2364 | 0.9479 | | 0.1451 | 10.0 | 10990 | 0.2357 | 0.9495 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
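The training script itself is not included in the card above. As a rough sketch, the listed hyperparameters could be expressed with the 🤗 `TrainingArguments` API as follows; the `output_dir` is illustrative, and the weight decay is only inferred from the model name ("wd1e-8"), not stated in the card.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above, expressed with the Trainer API.
# output_dir is illustrative; weight_decay is inferred from the model name,
# not stated explicitly in the card.
training_args = TrainingArguments(
    output_dir="convnext-base-wd1e-8-4e-5",
    learning_rate=4e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    weight_decay=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=10,
)
```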
Netta1994/setfit_2k
Netta1994
2024-05-28T11:16:12Z
6
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2024-05-28T11:15:42Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # Netta1994/setfit_2k This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("Netta1994/setfit_2k") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
vuongnhathien/convnext-base-wd1e-8-2e-5
vuongnhathien
2024-05-28T11:14:53Z
194
0
transformers
[ "transformers", "tensorboard", "safetensors", "convnextv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/convnextv2-base-22k-384", "base_model:finetune:facebook/convnextv2-base-22k-384", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T05:48:07Z
--- license: apache-2.0 base_model: facebook/convnextv2-base-22k-384 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: convnext-base-wd1e-8-2e-5 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.946031746031746 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-base-wd1e-8-2e-5 This model is a fine-tuned version of [facebook/convnextv2-base-22k-384](https://huggingface.co/facebook/convnextv2-base-22k-384) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2200 - Accuracy: 0.9460 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.6642 | 1.0 | 1099 | 0.3849 | 0.8946 | | 0.4658 | 2.0 | 2198 | 0.2898 | 0.9165 | | 0.3675 | 3.0 | 3297 | 0.2496 | 0.9304 | | 0.3174 | 4.0 | 4396 | 0.2326 | 0.9412 | | 0.3106 | 5.0 | 5495 | 0.2301 | 0.9435 | | 0.2678 | 6.0 | 6594 | 0.2303 | 0.9431 | | 0.2503 | 7.0 | 7693 | 0.2298 | 0.9427 | | 0.2204 | 8.0 | 8792 | 0.2216 | 0.9459 | | 0.2013 | 9.0 | 9891 | 0.2224 | 0.9463 | | 0.1808 | 10.0 | 10990 | 0.2207 | 0.9467 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
Debashish2412/cnn_news_summary_model_trained_on_reduced_data
Debashish2412
2024-05-28T11:13:32Z
104
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-28T10:46:56Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: cnn_news_summary_model_trained_on_reduced_data results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cnn_news_summary_model_trained_on_reduced_data This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6040 - Rouge1: 0.2179 - Rouge2: 0.0944 - Rougel: 0.1841 - Rougelsum: 0.184 - Generated Length: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Generated Length | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------------:| | No log | 1.0 | 431 | 1.6239 | 0.2174 | 0.0938 | 0.1831 | 0.183 | 19.0 | | 1.92 | 2.0 | 862 | 1.6075 | 0.2169 | 0.0937 | 0.183 | 0.1828 | 19.0 | | 1.8221 | 3.0 | 1293 | 1.6040 | 0.2179 | 0.0944 | 0.1841 | 0.184 | 19.0 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
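The card does not show how to run the model. A minimal inference sketch with the 🤗 `pipeline` API might look like this; the article text and generation lengths are placeholders, not values from the card.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub and summarize a news article.
summarizer = pipeline(
    "summarization",
    model="Debashish2412/cnn_news_summary_model_trained_on_reduced_data",
)

article = "..."  # a CNN-style news article goes here
summary = summarizer(article, max_length=64, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```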
ordis-co-ltd/sambanovasystems-SambaLingo-Thai-Chat-70B-Q4_K_M-gguf
ordis-co-ltd
2024-05-28T11:13:27Z
7
0
null
[ "gguf", "arxiv:2404.05829", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-28T10:52:28Z
This is the 4 bit quanitzed gguf model. Original model: https://huggingface.co/sambanovasystems/SambaLingo-Thai-Chat-70B # SambaLingo-Thai-Chat-70B <img src="/sambanovasystems/SambaLingo-Thai-Chat-70B/resolve/main/SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/> <!-- Provide a quick summary of what the model is/does. --> SambaLingo-Thai-Chat-70B is a human aligned chat model trained in Thai and English. It is trained using direct preference optimization on top the base model [SambaLingo-Thai-Base-70B](https://huggingface.co/sambanovasystems/SambaLingo-Thai-Base-70B). The base model adapts [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) to Thai by training on 26 billion tokens from the Thai split of the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. Try This Model at [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space). ## Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [SambaNova Systems](https://sambanova.ai/) - **Model type:** Language Model - **Language(s):** Thai, English - **Finetuned from model:** [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) - **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829) - **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts) ## Getting Started ### Loading Model With Hugging Face Please make sure to set use_fast=False when loading the tokenizer. ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Thai-Chat-70B", use_fast=False) model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Thai-Chat-70B", device_map="auto", torch_dtype="auto") ``` ### Interacting With Model Pipeline Please make sure to set use_fast=False when loading the tokenizer. ```python from transformers import pipeline pipe = pipeline("text-generation", model="sambanovasystems/SambaLingo-Thai-Chat-70B", device_map="auto", use_fast=False) messages = [ {"role": "user", "content": {YOUR_QUESTION}}, ] prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipe(prompt)[0] outputs = outputs["generated_text"] ``` ### Suggested Inference Parameters - Temperature: 0.8 - Repetition penalty: 1.0 - Top-p: 0.9 ### Prompting Guidelines To prompt this model, please use the following chat template: ``` <|user|>\n{question}</s>\n<|assistant|>\n ``` ### Example Prompts and Generations ``` <|user|> ประเทศไทยช่วงเช้าเคารพธงชาติเมื่อไร</s> <|assistant|> ในประเทศไทย เวลาเคารพธงชาติคือเวลา 08.00 น. และ 18.00 น. ทุกวัน ประชาชนจะยืนตรงและร้องเพลงชาติในช่วงเวลาเหล่านี้เพื่อเป็นสัญลักษณ์ของความรักชาติและความเคารพต่อประเทศ ``` ## Training Details The alignment phase follows the recipe for [Zephyr-7B](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), and comprises two stages: supervised fine-tuning (SFT) and Direct Performance Optimization (DPO). The SFT phase was done on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset mixed with the Google translated version of the ultrachat_200k dataset. It was trained for one epoch with global batch size 512 and max sequence length 2048 tokens. We used a linear decay learning rate of 2e-5 and 10% warmup. 
The DPO phase was done on the [ultrafeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset and [cai-conversation-harmless](https://huggingface.co/datasets/HuggingFaceH4/cai-conversation-harmless) dataset, mixed with 10% of the data Google translated. It was trained with global batch size 32 and for three epochs. We used a linear decay learning rate of 5e-7, 10% warmup and β=0.1 as the regularization factor for DPO. ## Tokenizer Details We extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language. ## Evaluation For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829) ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> Use of this model is governed by the Meta’s [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights. ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> SambaLingo should NOT be used for: - Mission-critical applications - Applications that involve the safety of others - Making highly important decisions ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Like all LLMs, SambaLingo has certain limitations: - Hallucination: Model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information. - Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output. - Repetition: The Model may produce repetitive phrases or sentences, leading to less engaging and informative responses. - Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited. - Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content. ## Acknowledgments We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative. We would like to give a special thanks to the following groups: - Meta for open sourcing LLama 2 and open sourcing FLORES-200 dataset - Nguyen et al for open sourcing CulturaX dataset - CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset - EleutherAI for their open source evaluation framework - Hugging Face-H4 team for open source the zephyr training recipe and alignment handbook repo ## Cite SambaLingo ``` @misc{csaki2024sambalingo, title={SambaLingo: Teaching Large Language Models New Languages}, author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker}, year={2024}, eprint={2404.05829}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
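Since this repository hosts the 4-bit GGUF quantization rather than the original weights, the `transformers` snippets above do not apply to it directly. A minimal sketch with `llama-cpp-python`, using the chat template and suggested inference parameters from the card, might look like this; the GGUF filename is an assumption and may differ in this repository.

```python
from llama_cpp import Llama

# The filename below is an assumption; check the repository for the actual GGUF file name.
llm = Llama(model_path="SambaLingo-Thai-Chat-70B.Q4_K_M.gguf", n_ctx=4096)

question = "..."  # your question, in Thai or English
prompt = f"<|user|>\n{question}</s>\n<|assistant|>\n"  # chat template from the card

output = llm(
    prompt,
    max_tokens=256,
    temperature=0.8,   # suggested inference parameters from the card
    top_p=0.9,
    repeat_penalty=1.0,
)
print(output["choices"][0]["text"])
```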
PrathapThunga/New_FineTuned_EML
PrathapThunga
2024-05-28T11:11:55Z
4
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/mistral-7b-v0.3-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-28T11:05:51Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft base_model: unsloth/mistral-7b-v0.3-bnb-4bit --- # Uploaded model - **Developed by:** PrathapThunga - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
chainup244/Qwen-Qwen1.5-1.8B-1716894165
chainup244
2024-05-28T11:10:15Z
138
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-28T11:02:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DiederikMartens/mBERT_sa_cv_9_fold8
DiederikMartens
2024-05-28T10:59:21Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T10:37:20Z
--- license: apache-2.0 base_model: google-bert/bert-base-multilingual-cased tags: - generated_from_trainer metrics: - f1 model-index: - name: mBERT_sa_cv_9_fold8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mBERT_sa_cv_9_fold8 This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5598 - F1: 0.6214 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.47e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 325 | 0.5670 | 0.4760 | | 0.5615 | 2.0 | 650 | 0.4945 | 0.6084 | | 0.5615 | 3.0 | 975 | 0.5598 | 0.6214 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
tringuyen-uit/ER_new_context
tringuyen-uit
2024-05-28T10:55:04Z
57
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "question-answering", "generated_from_trainer", "base_model:VietAI/vit5-base", "base_model:finetune:VietAI/vit5-base", "license:mit", "text-generation-inference", "endpoints_compatible", "region:us" ]
question-answering
2024-05-28T02:26:58Z
--- license: mit base_model: VietAI/vit5-base tags: - generated_from_trainer model-index: - name: ER_new_context results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ER_new_context This model is a fine-tuned version of [VietAI/vit5-base](https://huggingface.co/VietAI/vit5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4057 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.2979 | 0.1 | 100 | 1.2437 | | 1.1026 | 0.19 | 200 | 0.7365 | | 0.7482 | 0.29 | 300 | 0.5781 | | 0.6258 | 0.38 | 400 | 0.5159 | | 0.5153 | 0.48 | 500 | 0.4504 | | 0.4802 | 0.57 | 600 | 0.4455 | | 0.4905 | 0.67 | 700 | 0.4059 | | 0.382 | 0.76 | 800 | 0.4778 | | 0.3728 | 0.86 | 900 | 0.3985 | | 0.3274 | 0.96 | 1000 | 0.3982 | | 0.3639 | 1.05 | 1100 | 0.4184 | | 0.2881 | 1.15 | 1200 | 0.4454 | | 0.3194 | 1.24 | 1300 | 0.3778 | | 0.2695 | 1.34 | 1400 | 0.3957 | | 0.2894 | 1.43 | 1500 | 0.4000 | | 0.276 | 1.53 | 1600 | 0.3984 | | 0.2325 | 1.62 | 1700 | 0.3627 | | 0.2192 | 1.72 | 1800 | 0.3782 | | 0.279 | 1.81 | 1900 | 0.4161 | | 0.2636 | 1.91 | 2000 | 0.4026 | | 0.2932 | 2.01 | 2100 | 0.3232 | | 0.206 | 2.1 | 2200 | 0.3633 | | 0.1865 | 2.2 | 2300 | 0.4019 | | 0.1651 | 2.29 | 2400 | 0.4385 | | 0.167 | 2.39 | 2500 | 0.4277 | | 0.1705 | 2.48 | 2600 | 0.4083 | | 0.2321 | 2.58 | 2700 | 0.3667 | | 0.1912 | 2.67 | 2800 | 0.3772 | | 0.192 | 2.77 | 2900 | 0.4032 | | 0.1881 | 2.87 | 3000 | 0.4059 | | 0.152 | 2.96 | 3100 | 0.4057 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
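How the question and context are formatted at inference time is not described in the card. Since ViT5 is a T5-style seq2seq model, a rough sketch with the `text2text-generation` pipeline might look like this; the prompt layout is an assumption, not taken from the training setup.

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="tringuyen-uit/ER_new_context")

# The exact input format used during fine-tuning is not documented;
# the question-plus-context concatenation here is only an assumption.
question = "..."
context = "..."
result = generator(f"{question} {context}", max_length=64)
print(result[0]["generated_text"])
```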
InfiniFlow/bce-reranker-base_v1
InfiniFlow
2024-05-28T10:53:01Z
249
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T10:43:30Z
--- license: apache-2.0 ---
DiederikMartens/gBERT_sa_cv_9_fold8
DiederikMartens
2024-05-28T10:48:24Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-german-cased", "base_model:finetune:google-bert/bert-base-german-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T10:28:30Z
--- license: mit base_model: google-bert/bert-base-german-cased tags: - generated_from_trainer metrics: - f1 model-index: - name: gBERT_sa_cv_9_fold8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gBERT_sa_cv_9_fold8 This model is a fine-tuned version of [google-bert/bert-base-german-cased](https://huggingface.co/google-bert/bert-base-german-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4612 - F1: 0.7073 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.47e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 325 | 0.4051 | 0.5990 | | 0.4348 | 2.0 | 650 | 0.4612 | 0.7073 | | 0.4348 | 3.0 | 975 | 0.5851 | 0.6938 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
SparseLLM/prosparse-llama-2-13b
SparseLLM
2024-05-28T10:47:22Z
51
2
transformers
[ "transformers", "safetensors", "sparsellama", "feature-extraction", "text-generation", "custom_code", "en", "arxiv:2402.13516", "arxiv:2312.12456", "arxiv:2310.04564", "arxiv:2402.03804", "license:llama2", "region:us" ]
text-generation
2024-02-19T06:52:50Z
--- language: - en library_name: transformers license: llama2 pipeline_tag: text-generation --- # ProSparse-LLaMA-2-13B - Model creator: [Meta](https://huggingface.co/meta-llama) - Original model: [Llama 2 13B](https://huggingface.co/meta-llama/Llama-2-13b-hf) - Fine-tuned by: [THUNLP](https://nlp.csai.tsinghua.edu.cn/) and [ModelBest](modelbest.cn) - Paper: [link](https://arxiv.org/pdf/2402.13516.pdf) ### Introduction The utilization of activation sparsity, namely the existence of considerable weakly-contributed elements among activation outputs, is a promising method for inference acceleration of large language models (LLMs) ([Liu et al., 2023](https://proceedings.mlr.press/v202/liu23am/liu23am.pdf); [Song et al., 2023](https://arxiv.org/pdf/2312.12456.pdf)). Concretely, acceleration methods based on activation sparsity usually achieve higher inference speed by making wiser resource allocation and computation policies to avoid resource waste on these weakly-contributed parameters. Adopting ReLU as the activation function is a straightforward method to achieve activation sparsity. However, most recent mainstream LLMs adopt activation functions without intrinsic sparsity (e.g., GELU and Swish). Some efforts ([Zhang et al., 2022](https://aclanthology.org/2022.findings-acl.71.pdf); [Mirzadeh et al., 2023](https://arxiv.org/pdf/2310.04564.pdf); [Zhang et al., 2024](https://arxiv.org/pdf/2402.03804.pdf)) introduce ReLU or its variants as the substitutive activation function to help non-ReLU LLMs achieve activation sparsity and inference acceleration, but few can concurrently obtain high sparsity and comparable task-specific performance. In this work, we introduce a simple and effective sparsification method named "ProSparse" to push LLMs for higher activation sparsity while maintaining comparable performance. By applying ProSparse to Swish-activated LLaMA2-7B, LLaMA2-13B, and MiniCPM-1B, we obtain ReLU-activated models with high sparsity of 89.32%, 88.80%, and 87.89%, respectively, while their performance is comparable to the original version. These present the most sparsely activated models among open-source LLaMA versions and competitive end-size models, considerably surpassing ReluLLaMA-7B (66.98%) and ReluLLaMA-13B (71.56%). Further inference acceleration experiments demonstrate the practical speedup effects of higher sparsity on both [PowerInfer](https://arxiv.org/pdf/2312.12456.pdf) and our two sparse GPU [operators](https://github.com/Raincleared-Song/sparse_gpu_operator). ### Training Dataset We train the 13B model on about 134.22 billion tokens within 16,000 steps, including a mixture of the following two categories of data. - Language modeling datasets: * StarCoder * Wikipedia * Pile * Other collected datasets - Instruction tuning datasets: - UltraChat - P3 (multiple-choice QA) - PAQ - Unnatural Instructions - Flan - Super-Natural Instructions - Other collected datasets Intuitively, training the model with even more tokens or with data of a wider coverage and higher quality will obtain better task-specific performance. ### ProSparse: Training Methodology The training process of ProSparse consists of three steps (refer to Section 3.2 of [paper](https://arxiv.org/pdf/2402.13516.pdf) for more details): 1. **Activation Function Substitution**: We substitute the activation function of FFNs with ReLU and apply continual training; 2. **Progressive Sparsity Regularization**: We jointly optimize the model on the conventional next-token prediction loss and \\(L_1\\) regularization loss. 
The regularization is applied to the sparse intermediate outputs of FFNs with a regularization factor increasing progressively in multiple stages. Specifically, the regularization factor \\(\lambda\\) is set to a small constant for the warmup stage, and then increases along a smooth sine curve for each of the subsequent incremental stages. Each stage is accompanied by certain steps of training. In this way, the model can have more time to adapt to the increasing regularization without radical activation shifts, thus alleviating performance degradation. 3. **Activation Threshold Shifting**: We finally replace ReLU with FATReLU ([Kurtz et al., 2020](https://proceedings.mlr.press/v119/kurtz20a/kurtz20a.pdf)), a ReLU variant with a positive threshold. This can prune those non-zero weakly-contributed elements in activation outputs and further boost sparsity. The 13B model is trained on 32 A100 GPUs. The learning rate (LR) is controlled by a cosine scheduler with a peak LR of \\(5e-5\\). The hyper-parameters for each stage (including the regularization factor \\(\lambda_i\\), the accumulated training steps \\(T_i\\), and the accumulated training tokens) are shown as follows: | Step Number \\(i\\) | \\(\lambda_i\\) | \\(T_i\\) | Accumulated Tokens (B) | | :-------------: | :---------: | :----: | :--------------------: | | 0 | 0 | 5,500 | 46.14 | | 1 | \\(5e-3\\) | 6,750 | 56.62 | | 2 | \\(1e-2\\) | 10,750 | 90.18 | | 3 | \\(1e-2\\) | 11,000 | 92.27 | | 4 | \\(2e-2\\) | 15,000 | 125.83 | | 5 | \\(2e-2\\) | 16,000 | 134.22 | ### Evaluation Results The evaluation results on the above benchmarks demonstrate the advantage of ProSparse, which is the only method achieving high sparsity and comparable performance to the original Swish-activated LLaMA2. Note that models under all settings are trained with the same number of tokens on the same mixed dataset. Our evaluation is based on the framework [UltraEval](https://github.com/OpenBMB/UltraEval). The evaluation details are listed as follows: - **Code Generation**: We compute the average pass@1 scores on HumanEval (0-shot) and MBPP (3-shot). - **Commonsense Reasoning**: We report the average 0-shot accuracies on PIQA, SIQA, HellaSwag, WinoGrande, and COPA. - **Reading Comprehension**: We compute the average 0-shot accuracies on BoolQ, LAMBADA, and TyDi QA. - **Other Popular Benchmarks**: We report the average accuracies on GSM8K (8-shot), MMLU (5-shot), Big Bench Hard (BBH) (3-shot), and AGI-Eval (0-shot). **Notes**: For PIQA, SIQA, HellaSwag, WinoGrande, COPA, BoolQ, LAMBADA, TyDi QA, and AGI-Eval, we obtain the predicted answers based on maximized perplexity. For GSM8K, MMLU, and BBH, the predicted answers are directly generated. 
| Setting | Average<br>Sparsity | Average<br>Performance | Code<br>Generation | Commonsense<br>Reasoning | Reading<br>Comprehension | GSM8K | MMLU | BBH | AGI Eval | | :-------------------: | :----------------: | :----------------------: | :----------------------: | :---: | :---: | :---: | :---------: | :-----: | :-----------------: | | LLaMA2-7B | - | 37.96 | 16.37 | 69.59 | 61.87 | 12.96 | 44.45 | 32.96 | 27.53 | | ReluLLaMA-7B | 66.98 | 37.62 | 15.85 | 69.64 | 70.54 | 5.84 | 38.64 | 35.07 | 27.73 | | **ProSparse-7B**\* | 88.11 | 38.31 | 19.47 | 66.29 | 63.33 | 12.74 | 45.21 | 33.59 | 27.55 | | **ProSparse-7B** | **89.32** | **38.46** | 19.42 | 66.27 | 63.50 | 12.13 | 45.48 | 34.99 | 27.46 | | LLaMA2-13B | - | 44.06 | 20.19 | 72.58 | 71.55 | 22.21 | 54.69 | 37.89 | 29.33 | | ReluLLaMA-13B | 71.56 | 42.74 | 20.19 | 70.44 | 73.29 | 18.50 | 50.58 | 37.97 | 28.22 | | **ProSparse-13B**\* | 87.97 | **45.07** | 29.03 | 69.75 | 67.54 | 25.40 | 54.78 | 40.20 | 28.76 | | **ProSparse-13B** | **88.80** | 44.90 | 28.42 | 69.76 | 66.91 | 26.31 | 54.35 | 39.90 | 28.67 | | MiniCPM-1B | - | 44.44 | 36.85 | 63.67 | 60.90 | 35.48 | 50.44 | 35.03 | 28.71 | | **ProSparse-1B**\* | 86.25 | **44.72** | 41.38 | 64.55 | 60.69 | 34.72 | 49.36 | 34.04 | 28.27 | | **ProSparse-1B** | **87.89** | **44.72** | 42.04 | 64.37 | 60.73 | 34.57 | 49.51 | 34.08 | 27.77 | **Notes**: "Original" refers to the original Swish-activated LLaMA2 versions. ReluLLaMA-7B and ReluLLaMA-13B are available at [7B](https://huggingface.co/SparseLLM/ReluLLaMA-7B) and [13B](https://huggingface.co/SparseLLM/ReluLLaMA-13B) respectively. MiniCPM-1B is available at [1B](https://huggingface.co/openbmb/MiniCPM-1B-sft-bf16). "ProSparse-7B\*", "ProSparse-13B\*", and "ProSparse-1B\*" denote the ProSparse versions without activation threshold shifting. ### Evaluation Issues with LM-Eval The above results can be replicated with [UltraEval](https://github.com/OpenBMB/UltraEval). Some abnormal results obtained with other popular frameworks such as [LM-Eval](https://github.com/EleutherAI/lm-evaluation-harness) are probably attributed to the absence of the cls token `<s>`, which is not added by default in LM-Eval. A quick temporary fix is shown in the following codes. Other differences in evaluation results may be caused by other reasons, including the few-shot settings, data pre-processing, and extra prompts. ```python # https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/models/huggingface.py#L945 for _, context_enc, continuation_enc in chunk: # sanity check assert len(context_enc) > 0 # Note: a trivial fix here if context_enc[0] != 1: context_enc = [1] + context_enc assert len(continuation_enc) > 0 assert len(continuation_enc) <= self.max_length ``` Here are the steps to adapting the original [vLLM](https://github.com/vllm-project/vllm) to ProSparse models. 1. Replace the file [vllm/model_executor/models/llama.py](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/llama.py) in original vLLM with this [file](https://github.com/Raincleared-Song/DejaVu_predictor/blob/main/llama.py). 2. Replace the contents of the original [config.json](https://huggingface.co/SparseLLM/prosparse-llama-2-13b/blob/main/config.json) with this [file](https://github.com/Raincleared-Song/DejaVu_predictor/blob/main/config_13b.json). 3. Set the environment variable `ACT_INFO`. To test the version without activation threshold shifting, `export ACT_INFO=relu`. To test the version with activation threshold shifting, `export ACT_INFO=fatrelu_0.01`. 
### Inference Acceleration Effects First, we utilize [PowerInfer](https://arxiv.org/pdf/2312.12456.pdf), a state-of-the-art acceleration framework leveraging activation sparsity. As its inference speed and accuracy heavily rely on the performance of activation predictors, we report the activation recall and predicted sparsity (i.e., two key metrics for evaluating the activation predictor) as well as the number of tokens generated per second by PowerInfer (with one A100 GPU and sufficient CPUs). The GGUF files and activation predictors for ProSparse-13B are available at [ProSparse-LLaMA-2-13B-GGUF](https://huggingface.co/PowerInfer/prosparse-llama-2-13b-gguf) ([duplicate](https://huggingface.co/SparseLLM/prosparse-llama-2-13b-gguf)) and [ProSparse-LLaMA-2-13B-Predictor](https://huggingface.co/PowerInfer/prosparse-llama-2-13b-predictor) ([duplicate](https://huggingface.co/SparseLLM/prosparse-llama-2-13b-predictor)) respectively. Moreover, considering the potential inference inaccuracies caused by wrong predictions of activation predictors, we implement two sparse GPU [operators](https://github.com/Raincleared-Song/sparse_gpu_operator) for faster accurate inference utilizing activation sparsity. They are responsible for the speedup of two key steps in a gated FFN: - Step (2) (`S2`): a fused operator of ReLU and \\(\mathbf{s} \odot (\mathbf{x} \mathbf{W}_1^T)\\); - Step (3) (`S3`): a sparse matrix-vector multiplication operator \\(\mathbf{x}_1 \mathbf{W}_2^T\\). where \\(\mathbf{s}\\), \\(\mathbf{x}\\), \\(\mathbf{x}_1\\), and \\(\odot\\) denote the gating scores, the FFN input hidden states, the intermediate outputs, and the element-wise multiplication respectively. \\(\mathbf{W}_1\\) and \\(\mathbf{W}_2\\) are FFN weight matrices. The acceleration effects of LLMs with different sparsity are displayed as follows. ProSparse, which reaches a high sparsity without performance degradation, can gain the most benefits among all the settings concerned. Refer to Section 4.3 of [paper](https://arxiv.org/pdf/2402.13516.pdf) for more details. | Setting | Average<br>Sparsity | Activation<br>Recall | Predicted<br>Sparsity | PowerInfer<br>Speed | Speedup<br>to Dense | `S2`<br>Time | Speedup<br>to Dense | `S3`<br/>Time | Speedup<br/>to Dense | | :-------------------: | :-----------------: | :------------------: | :-------------------: | :-----------------: | :-----------------: | :--------------: | :-----------------: | :---------------: | :------------------: | | Dense-7B | - | - | - | 3.67 | 1.00 | 90.55 | 1.00 | 82.92 | 1.00 | | ReluLLaMA-7B | 66.98 | 90.89 | 58.95 | 11.37 | 3.10 | 67.12 | 1.35 | 63.00 | 1.32 | | **ProSparse-7B**\* | 88.11 | **93.46** | 75.24 | **16.30** | **4.44** | 46.66 | 1.94 | 55.56 | 1.49 | | **ProSparse-7B** | **89.32** | 92.34 | **78.75** | - | - | **45.38** | **2.00** | **55.05** | **1.51** | | Dense-13B | - | - | - | 1.92 | 1.00 | 131.36 | 1.00 | 113.68 | 1.00 | | ReluLLaMA-13B | 71.56 | 86.41 | 71.93 | 6.59 | 3.43 | 69.92 | 1.88 | 75.47 | 1.51 | | **ProSparse-13B**\* | 87.97 | 91.02 | 77.93 | **8.67** | **4.52** | 55.29 | 2.38 | 67.50 | 1.68 | | **ProSparse-13B** | **88.80** | **91.11** | **78.28** | - | - | **53.78** | **2.44** | **66.73** | **1.70** | **Notes**: For "Dense" settings, the "Inference Speed" (token/sec) is obtained by [llama.cpp](https://github.com/ggerganov/llama.cpp), and the time (us) for steps (2) and (3) is measured without sparse GPU operators. 
For other sparse settings, the "Inference Speed" is obtained by [PowerInfer](https://arxiv.org/pdf/2312.12456.pdf), and sparse GPU operators are applied. ProSparse settings with activation threshold shifting and the MiniCPM architecture are not supported by PowerInfer at present. ### License Disclaimer This model is bound by the license & usage restrictions of the original Llama-2 model and comes with no warranty or guarantees of any kind. ### Limitations & Biases Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/ ### Citation Please kindly cite using the following BibTeX: ```bibtex @article{song2024prosparse, title={{ProSparse}: Introducing and Enhancing Intrinsic Activation Sparsity within Large Language Models}, author={Song, Chenyang and Han, Xu and Zhang, Zhengyan and Hu, Shengding and Shi, Xiyu and Li, Kuai and Chen, Chen and Liu, Zhiyuan and Li, Guangli and Yang, Tao and Sun, Maosong}, year={2024}, journal={arXiv preprint arXiv:2402.13516}, url={https://arxiv.org/pdf/2402.13516.pdf} } ``` #### Acknowledgments The model card is modified from [ReluLLaMA-13B](https://huggingface.co/SparseLLM/ReluLLaMA-13B).
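To make the activation threshold shifting step above more concrete, here is a small illustrative sketch of FATReLU and of measuring activation sparsity on an FFN intermediate output. The threshold value 0.01 mirrors the `fatrelu_0.01` setting mentioned earlier; everything else is illustrative rather than taken from the ProSparse code.

```python
import torch

def fatrelu(x: torch.Tensor, threshold: float = 0.01) -> torch.Tensor:
    """ReLU with a positive activation threshold (FATReLU):
    elements at or below the threshold are pruned to zero."""
    return torch.where(x > threshold, x, torch.zeros_like(x))

def activation_sparsity(x: torch.Tensor) -> float:
    """Fraction of zero elements in an activation tensor."""
    return (x == 0).float().mean().item()

# Illustrative FFN intermediate output (random values, not real model activations).
hidden = torch.randn(4, 13824)          # 13824 is the LLaMA-2-13B FFN intermediate size
sparse_hidden = fatrelu(hidden, 0.01)
print(f"sparsity: {activation_sparsity(sparse_hidden):.2%}")
```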
C4Scale/deberta-v3-base_finetuned_bluegennx_run2.19_2e
C4Scale
2024-05-28T10:43:01Z
106
0
transformers
[ "transformers", "safetensors", "deberta-v2", "token-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-base", "base_model:finetune:microsoft/deberta-v3-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-28T09:06:59Z
--- license: mit base_model: microsoft/deberta-v3-base tags: - generated_from_trainer model-index: - name: deberta-v3-base_finetuned_bluegennx_run2.19_2e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-base_finetuned_bluegennx_run2.19_2e This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0201 - Overall Precision: 0.9745 - Overall Recall: 0.9862 - Overall F1: 0.9803 - Overall Accuracy: 0.9952 - Aadhar Card F1: 0.9837 - Age F1: 0.9633 - City F1: 0.9842 - Country F1: 0.9843 - Creditcardcvv F1: 0.9879 - Creditcardnumber F1: 0.9416 - Date F1: 0.9600 - Dateofbirth F1: 0.9023 - Email F1: 0.9900 - Expirydate F1: 0.9912 - Organization F1: 0.9910 - Pan Card F1: 0.9867 - Person F1: 0.9878 - Phonenumber F1: 0.9858 - Pincode F1: 0.9907 - Secondaryaddress F1: 0.9878 - State F1: 0.9909 - Time F1: 0.9820 - Url F1: 0.9949 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Aadhar Card F1 | Age F1 | City F1 | Country F1 | Creditcardcvv F1 | Creditcardnumber F1 | Date F1 | Dateofbirth F1 | Email F1 | Expirydate F1 | Organization F1 | Pan Card F1 | Person F1 | Phonenumber F1 | Pincode F1 | Secondaryaddress F1 | State F1 | Time F1 | Url F1 | |:-------------:|:-----:|:-----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|:--------------:|:------:|:-------:|:----------:|:----------------:|:-------------------:|:-------:|:--------------:|:--------:|:-------------:|:---------------:|:-----------:|:---------:|:--------------:|:----------:|:-------------------:|:--------:|:-------:|:------:| | 0.0261 | 1.0 | 15321 | 0.0287 | 0.9619 | 0.9781 | 0.9700 | 0.9934 | 0.9613 | 0.9463 | 0.9541 | 0.9832 | 0.9793 | 0.9270 | 0.9481 | 0.8767 | 0.9793 | 0.9809 | 0.9882 | 0.9751 | 0.9840 | 0.9747 | 0.9835 | 0.9831 | 0.9620 | 0.9780 | 0.9873 | | 0.0152 | 2.0 | 30642 | 0.0201 | 0.9745 | 0.9862 | 0.9803 | 0.9952 | 0.9837 | 0.9633 | 0.9842 | 0.9843 | 0.9879 | 0.9416 | 0.9600 | 0.9023 | 0.9900 | 0.9912 | 0.9910 | 0.9867 | 0.9878 | 0.9858 | 0.9907 | 0.9878 | 0.9909 | 0.9820 | 0.9949 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
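The card above does not include an inference snippet; the following is a minimal sketch (the example sentence and the PII strings in it are invented) showing how this token-classification checkpoint can be loaded with the 🤗 `pipeline` API.

```python
from transformers import pipeline

# PII/NER tagging with the fine-tuned DeBERTa-v3 checkpoint from this card
pii_tagger = pipeline(
    "token-classification",
    model="C4Scale/deberta-v3-base_finetuned_bluegennx_run2.19_2e",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entity spans
)

text = "John Smith was born on 12/03/1990 and can be reached at [email protected] or +1-202-555-0175."
for entity in pii_tagger(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```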
Unbabel/wmt20-comet-qe-da-marian
Unbabel
2024-05-28T10:42:23Z
0
0
null
[ "translation", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "license:apache-2.0", "region:us" ]
translation
2024-05-28T10:17:11Z
--- pipeline_tag: translation language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: apache-2.0 --- Marian version of [wmt20-comet-qe-da](https://huggingface.co/Unbabel/wmt20-comet-qe-da). Credits to the Microsoft Translate Team! # Paper TBA # License Apache-2.0 # Usage TBA # Intended uses Our model is intended to be used for **MT evaluation**. Given a triplet (source sentence, translation, reference translation), it outputs a single score between 0 and 1, where 1 represents a perfect translation. # Languages Covered: This model builds on top of XLM-R, which covers the following languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Scottish Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western Frisian, Xhosa, Yiddish. Thus, results for language pairs containing uncovered languages are unreliable!
KJTMukisa/wav2vec2-large-xls-r-300m-lg-cv-1hr
KJTMukisa
2024-05-28T10:39:29Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_17_0", "base_model:facebook/wav2vec2-xls-r-300m", "base_model:finetune:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-28T09:51:27Z
--- license: apache-2.0 base_model: facebook/wav2vec2-xls-r-300m tags: - generated_from_trainer datasets: - common_voice_17_0 metrics: - wer model-index: - name: wav2vec2-large-xls-r-300m-lg-cv-1hr results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_17_0 type: common_voice_17_0 config: lg split: test args: lg metrics: - name: Wer type: wer value: 0.833292109819441 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-lg-cv-1hr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_17_0 dataset. It achieves the following results on the evaluation set: - Loss: inf - Wer: 0.8333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:----:|:---------------:|:------:| | 3.7902 | 22.2222 | 400 | inf | 0.8333 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
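As a usage sketch not present in the original card, the checkpoint above can be tried with the automatic-speech-recognition pipeline; the audio path is a placeholder, and the input is assumed to be a 16 kHz mono Luganda recording as is usual for XLS-R models.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="KJTMukisa/wav2vec2-large-xls-r-300m-lg-cv-1hr",
)

# "luganda_sample.wav" is a placeholder path for a local recording.
result = asr("luganda_sample.wav")
print(result["text"])
```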
DiederikMartens/mBERT_sa_cv_9_fold7
DiederikMartens
2024-05-28T10:37:13Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T10:16:10Z
--- license: apache-2.0 base_model: google-bert/bert-base-multilingual-cased tags: - generated_from_trainer metrics: - f1 model-index: - name: mBERT_sa_cv_9_fold7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mBERT_sa_cv_9_fold7 This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5787 - F1: 0.4514 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.47e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 325 | 0.7377 | 0.2854 | | 0.7481 | 2.0 | 650 | 0.6467 | 0.3089 | | 0.7481 | 3.0 | 975 | 0.5787 | 0.4514 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
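Since the card above leaves usage unspecified, here is a minimal loading sketch. The card does not document the label set or the task beyond text classification (the "sa" in the name suggests sentiment analysis), and the German example sentence is invented.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="DiederikMartens/mBERT_sa_cv_9_fold7",
)

# Returns a list like [{'label': ..., 'score': ...}]; label names come from the model config.
print(classifier("Das neue Update ist wirklich gelungen."))
```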
DiederikMartens/tsBERT_sa_cv_9_fold7
DiederikMartens
2024-05-28T10:36:36Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:igorsterner/german-english-code-switching-bert", "base_model:finetune:igorsterner/german-english-code-switching-bert", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T10:15:29Z
--- license: mit base_model: igorsterner/german-english-code-switching-bert tags: - generated_from_trainer metrics: - f1 model-index: - name: tsBERT_sa_cv_9_fold7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tsBERT_sa_cv_9_fold7 This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4274 - F1: 0.6756 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.47e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 325 | 0.4375 | 0.6171 | | 0.439 | 2.0 | 650 | 0.4274 | 0.6756 | | 0.439 | 3.0 | 975 | 0.5711 | 0.6652 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
moetezsa/test1
moetezsa
2024-05-28T10:32:39Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-28T10:29:26Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SiyuK/bart-cnn-samsum-finetuned
SiyuK
2024-05-28T10:32:08Z
111
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "dataset:samsum", "base_model:facebook/bart-large-cnn", "base_model:finetune:facebook/bart-large-cnn", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-23T05:20:00Z
--- license: mit base_model: facebook/bart-large-cnn tags: - generated_from_trainer datasets: - samsum model-index: - name: bart-cnn-samsum-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-cnn-samsum-finetuned This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 0.3268 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2322 | 1.0 | 37 | 0.3228 | | 0.2097 | 2.0 | 74 | 0.3163 | | 0.173 | 3.0 | 111 | 0.3172 | | 0.1685 | 4.0 | 148 | 0.3258 | | 0.1329 | 5.0 | 185 | 0.3268 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
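A minimal usage sketch, not part of the original card: the SAMSum-style dialogue below is invented and the generation arguments are illustrative defaults.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="SiyuK/bart-cnn-samsum-finetuned")

dialogue = (
    "Anna: Are we still meeting at 6?\n"
    "Ben: Yes, but I might be 10 minutes late.\n"
    "Anna: No problem, see you at the cafe."
)
print(summarizer(dialogue, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```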
digiplay/ti
digiplay
2024-05-28T10:25:11Z
0
0
null
[ "license:other", "region:us" ]
null
2024-05-28T08:42:12Z
--- license: other --- unaestheticXL_Alb2.safetensors https://civitai.com/models/119032?modelVersionId=363593
iLYoungZ/Qwen-1_8B-Law
iLYoungZ
2024-05-28T10:22:00Z
104
0
transformers
[ "transformers", "safetensors", "qwen", "feature-extraction", "llama-factory", "custom_code", "arxiv:1910.09700", "region:us" ]
feature-extraction
2024-05-28T10:20:44Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Unbabel/wmt20-comet-da-marian
Unbabel
2024-05-28T10:20:25Z
0
0
null
[ "translation", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "license:apache-2.0", "region:us" ]
translation
2024-05-28T10:11:17Z
--- pipeline_tag: translation language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: apache-2.0 --- Marian version of [wmt20-comet-da](https://huggingface.co/Unbabel/wmt20-comet-da). Credits to the Microsoft Translate Team! # Paper TBA # License Apache-2.0 # Usage TBA # Intended uses Our model is intended to be used for **MT evaluation**. Given a triplet (source sentence, translation, reference translation), it outputs a single score between 0 and 1, where 1 represents a perfect translation. # Languages Covered: This model builds on top of XLM-R, which covers the following languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Scottish Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western Frisian, Xhosa, Yiddish. Thus, results for language pairs containing uncovered languages are unreliable!
DiederikMartens/mBERT_sa_cv_9_fold6
DiederikMartens
2024-05-28T10:16:03Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T09:54:52Z
--- license: apache-2.0 base_model: google-bert/bert-base-multilingual-cased tags: - generated_from_trainer metrics: - f1 model-index: - name: mBERT_sa_cv_9_fold6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mBERT_sa_cv_9_fold6 This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5171 - F1: 0.6418 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.47e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 325 | 0.4356 | 0.5107 | | 0.5161 | 2.0 | 650 | 0.4205 | 0.6172 | | 0.5161 | 3.0 | 975 | 0.5171 | 0.6418 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
PrithviS/ppo-LunarLander-v2
PrithviS
2024-05-28T10:15:26Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-28T10:15:07Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 276.29 +/- 19.52 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename below is an assumption; replace it with the actual file stored in this repository):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename is assumed; use the .zip actually stored in this repository.
checkpoint = load_from_hub(repo_id="PrithviS/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
DiederikMartens/tsBERT_sa_cv_9_fold6
DiederikMartens
2024-05-28T10:15:22Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:igorsterner/german-english-code-switching-bert", "base_model:finetune:igorsterner/german-english-code-switching-bert", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T09:54:20Z
--- license: mit base_model: igorsterner/german-english-code-switching-bert tags: - generated_from_trainer metrics: - f1 model-index: - name: tsBERT_sa_cv_9_fold6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tsBERT_sa_cv_9_fold6 This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5501 - F1: 0.7050 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.47e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 325 | 0.4051 | 0.6175 | | 0.4383 | 2.0 | 650 | 0.3831 | 0.6857 | | 0.4383 | 3.0 | 975 | 0.5501 | 0.7050 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
mebinjoy/falcon2_finetuned
mebinjoy
2024-05-28T10:14:10Z
8
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "text-generation", "conversational", "dataset:timdettmers/openassistant-guanaco", "base_model:tiiuae/falcon-11B", "base_model:adapter:tiiuae/falcon-11B", "license:unknown", "region:us" ]
text-generation
2024-01-04T09:34:15Z
--- license: unknown library_name: peft tags: - trl - sft - generated_from_trainer base_model: tiiuae/falcon-11B model-index: - name: falcon2_guanaco results: [] datasets: - timdettmers/openassistant-guanaco pipeline_tag: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon2_guanaco This model is a fine-tuned version of [tiiuae/falcon-11B](https://huggingface.co/tiiuae/falcon-11B) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.11.2.dev0 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
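Because the repository above stores a PEFT adapter rather than full weights, loading typically means attaching the adapter to the tiiuae/falcon-11B base model. The sketch below follows the usual PEFT pattern and is an assumption, not an excerpt from the card; the prompt format is a guess based on the openassistant-guanaco dataset listed in the metadata.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the fine-tuned adapter from this repository.
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-11B", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "mebinjoy/falcon2_finetuned")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-11B")

prompt = "### Human: What is parameter-efficient fine-tuning?### Assistant:"  # assumed Guanaco-style prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```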
projecte-aina/multiner_ceil
projecte-aina
2024-05-28T10:13:00Z
34
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "ca", "dataset:projecte-aina/ceil", "arxiv:1907.11692", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-31T13:44:40Z
--- license: apache-2.0 datasets: - projecte-aina/ceil language: - ca metrics: - type: f1 value: 0.836 - type: precision value: 0.82069 - type: recall value: 0.8523 pipeline_tag: token-classification widget: - text: "El raper nord-americà Travis Scott ha gravat el videoclip de la seva cançó 'Circus Maximus' amb els Castellers de Vilafranca. Segons ha publicat la 'Revista Castells' i ha confirmat l'Agència Catalana de Notícies (ACN), el rodatge es va fer el 2 de juliol a la Tarraco Arena Plaça (TAP) de Tarragona." - text: "Les Guerres Carlines (dites també popularment en català carlinades) foren tres guerres que tingueren lloc a Espanya al segle xix com a expressió militar del moviment polític carlí i que al llarg del segle xix van enfrontar els carlins o carlistes i els seus descendents." - text: "El Centre de Coordinació de Rescat Marí de la ciutat de Novorossisk, a Crimea, ha confirmat que el petrolier ha patit danys, i ha explicat que el Servei de Salvament Marítim rus ha remolcat el vaixell. El vice-president del Consell de Seguretat de Rússia, Dmitri Medvédev, ha acusat Ucraïna de voler provocar una catàstrofe mediambiental al mar Negre" --- # Catalan BERTa (RoBERTa-large) finetuned for Named Entity Recognition. ## Table of Contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-uses-and-limitations) - [How to Use](#how-to-use) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Variable and metrics](#variable-and-metrics) - [Evaluation results](#evaluation-results) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citation information](#citation-information) - [Disclaimer](#disclaimer) </details> ## Model description The **multiner** model is a Named Entity Recognition (NER) model for the Catalan language, fine-tuned from the BERTa model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the BERTa model card for more details). It has been trained on a dataset of almost 59K short documents of all kinds, annotated with 9 main entity types and 52 subtypes. ## Intended uses and limitations ## How to use
```python
from transformers import pipeline

pipe = pipeline("ner", model="projecte-aina/multiner_ceil")
example = "George Smith Patton fué un general del Ejército de los Estados Unidos en Europa durante la Segunda Guerra Mundial. "
ner_entity_results = pipe(example, aggregation_strategy="simple")
print(ner_entity_results)
```
## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training We used the NERC dataset in Catalan called [Catalan Entity Identification and Linking](https://huggingface.co/datasets/projecte-aina/ceil) for training and evaluation. ## Evaluation Accuracy was calculated using the development set, and reflects the non-balanced nature of the dataset. ### Major types | Type | Accuracy | num.
Instances in dev set | | ------ | ------ | ------ | | CW * | 0.842 | 4551 | | GPE | 0.914 | 19751 | | Other | 0.69 | 2824 | | building | 0.736 | 2188 | | event | 0.739 | 3000 | | location | 0.819 | 3408 | | organization | 0.895 | 17285 | | person | 0.903 | 21689 | | product | 0.64 | 1038 | *: Cultural Work ### Subtypes | Type | Accuracy | num. Instances in dev set | | ------ | ------ | ------ | | CW-broadcastprogram | 0.854 | 765 | | CW-film | 0.809 | 549 | | CW-music | 0.862 | 1027 | | CW-other | 0.495 | 555 | | CW-painting | 0.654 | 205 | | CW-writtenart | 0.814 | 1450 | | GPE | 0.914 | 19751 | | Other | 0.69 | 2824 | | building-airport | 0.733 | 176 | | building-governmentfacility | 0.514 | 72 | | building-hospital | 0.805 | 113 | | building-hotel | 0.688 | 32 | | building-other | 0.726 | 1585 | | building-religious | 0.0 | 1 | | building-restaurant | 0.458 | 48 | | building-shops | 0.206 | 34 | | building-sportsfacility | 0.74 | 127 | | event-attack/terrorism/militaryconflict | 0.866 | 411 | | event-disaster | 0.261 | 23 | | event-other | 0.695 | 1069 | | event-political | 0.527 | 444 | | event-protest | 0.207 | 29 | | event-sportsevent | 0.822 | 1024 | | location-bodiesofwater | 0.865 | 673 | | location-island | 0.457 | 140 | | location-mountain | 0.781 | 515 | | location-other | 0.757 | 1602 | | location-park | 0.581 | 93 | | location-road/railway/highway/transit | 0.805 | 385 | | organization-education | 0.868 | 2097 | | organization-government | 0.905 | 2939 | | organization-media | 0.888 | 1963 | | organization-onlinebusiness | 0.538 | 197 | | organization-other | 0.788 | 4733 | | organization-politicalparty | 0.956 | 2272 | | organization-privatecompany | 0.849 | 1809 | | organization-religious | 0.638 | 210 | | organization-sportsteam | 0.946 | 1065 | | person-actor/director | 0.797 | 1480 | | person-artist/author | 0.853 | 5812 | | person-athlete | 0.871 | 1306 | | person-group | 0.485 | 699 | | person-influencer | 0.0 | 17 | | person-other | 0.811 | 8444 | | person-politician | 0.863 | 3259 | | person-scholar/scientist | 0.728 | 672 | | product-E-device | 0.51 | 102 | | product-clothing | 0.222 | 27 | | product-consumer_good | 0.0 | 20 | | product-food | 0.673 | 324 | | product-other | 0.0 | 69 | | product-software | 0.67 | 382 | | product-vehicle | 0.825 | 114 | ## Additional information ### Author Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to [email protected] ### Copyright Copyright (c) 2023 Language Technologies Unit (LangTech) at Barcelona Supercomputing Center ### Licensing Information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work/research has been promoted and financed by the Government of Catalonia through the [Aina project](https://projecteaina.cat/). ### Citation information ### Disclaimer <details> <summary>Click to expand</summary> The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. 
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
haturusinghe/LLAMA3-Finetune-v1-1.73_loss-May-28-2024
haturusinghe
2024-05-28T10:12:41Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-28T10:11:57Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** haturusinghe - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Harshsinghhh/shovel
Harshsinghhh
2024-05-28T10:12:31Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-28T10:12:02Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** Harshsinghhh - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
odedregev/Llama-2-7b-chat-hf-science-forum-sft
odedregev
2024-05-28T10:10:25Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-28T10:03:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DiederikMartens/gBERT_sa_cv_9_fold6
DiederikMartens
2024-05-28T10:08:12Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-german-cased", "base_model:finetune:google-bert/bert-base-german-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T09:48:03Z
--- license: mit base_model: google-bert/bert-base-german-cased tags: - generated_from_trainer metrics: - f1 model-index: - name: gBERT_sa_cv_9_fold6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gBERT_sa_cv_9_fold6 This model is a fine-tuned version of [google-bert/bert-base-german-cased](https://huggingface.co/google-bert/bert-base-german-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3783 - F1: 0.6981 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.47e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 325 | 0.3943 | 0.6492 | | 0.4221 | 2.0 | 650 | 0.3783 | 0.6981 | | 0.4221 | 3.0 | 975 | 0.5532 | 0.6790 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
pduy395/custom-bert
pduy395
2024-05-28T10:05:42Z
212
0
transformers
[ "transformers", "safetensors", "roberta", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-28T10:05:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
derbaliSamar/TinnyLlamaFinetunning
derbaliSamar
2024-05-28T10:03:50Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/tinyllama-bnb-4bit", "base_model:finetune:unsloth/tinyllama-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-28T10:03:42Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/tinyllama-bnb-4bit --- # Uploaded model - **Developed by:** derbaliSamar - **License:** apache-2.0 - **Finetuned from model :** unsloth/tinyllama-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
asiansoul/CaddyMill-Llama-3-VIBE-KoEn-8B
asiansoul
2024-05-28T10:00:50Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-28T03:33:03Z
--- license: other license_name: other license_link: LICENSE --- <a href="https://ibb.co/NYn4fXc"><img src="https://i.ibb.co/0JVw1kH/Screenshot-2024-05-24-at-6-13-22-PM.png" alt="Screenshot-2024-05-24-at-6-13-22-PM" border="0"></a> Model Mixed by [Vibe Merge Method](https://medium.com/@puffanddmx82/vibe-enhancing-language-models-with-dynamic-attention-merge-method-2edc16726db7) Keep in mind that accuracy on your particular questions may vary for this merge. When evaluating an LLM, don't rely on others' claims; verify the facts yourself. If you'd like to support further work, you can buy me a cup of coffee. Are you ready to enjoy the LLM party? [Toonation Donation](https://toon.at/donate/asiansoul) ETH/USDT(ERC20) Donation: 0x8BB117dD4Cc0E19E5536ab211070c0dE039a85c0 ``` merged info ref: "MLP-KTLim/llama-3-Korean-Bllossom-8B" base: "NousResearch/Meta-Llama-3-8B-Instruct" target: "maum-ai/Llama-3-MAAL-8B-Instruct-v0.1" ```
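The block above only names the ref, base, and target checkpoints; the actual Vibe dynamic-attention merge is described in the linked Medium post. Purely as an illustration of combining two Llama-3 checkpoints — not the author's method, and with an arbitrary mixing coefficient — a plain linear weight merge could look like this:

```python
# Illustrative linear state-dict merge (NOT the Vibe method; alpha is an arbitrary assumption)
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("NousResearch/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16)
target = AutoModelForCausalLM.from_pretrained("maum-ai/Llama-3-MAAL-8B-Instruct-v0.1", torch_dtype=torch.bfloat16)

alpha = 0.5  # illustrative mixing weight
target_state = target.state_dict()
merged = {name: alpha * tensor + (1 - alpha) * target_state[name] for name, tensor in base.state_dict().items()}

base.load_state_dict(merged)
base.save_pretrained("llama3-8b-linear-merge")
```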
ShinjuM/rut5-base-simple-aphasia
ShinjuM
2024-05-28T09:58:45Z
106
0
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-28T09:55:44Z
--- library_name: transformers tags: [] widget: - text: >- Мази и гели – простой и эффективный способ справиться с болью в мышцах и суставах путем наружного применения. inference: parameters: num_beams: 10 no_repeat_ngram_size: 10 max_length: 500 do_sample: false --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
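The YAML header of this card specifies inference parameters (num_beams: 10, no_repeat_ngram_size: 10, max_length: 500, do_sample: false) and a Russian widget example, but no usage code; below is a minimal sketch that applies those exact settings through the text2text API (the loading boilerplate itself is an assumption):

```python
# Minimal sketch: generation with the parameters declared in the card's YAML header
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "ShinjuM/rut5-base-simple-aphasia"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Мази и гели – простой и эффективный способ справиться с болью в мышцах и суставах путем наружного применения."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(
    **inputs,
    num_beams=10,
    no_repeat_ngram_size=10,
    max_length=500,
    do_sample=False,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```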
sachin18/layoutlm-funsd-tf
sachin18
2024-05-28T09:56:37Z
61
0
transformers
[ "transformers", "tf", "layoutlm", "token-classification", "generated_from_keras_callback", "base_model:microsoft/layoutlm-base-uncased", "base_model:finetune:microsoft/layoutlm-base-uncased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-26T19:06:20Z
--- license: mit tags: - generated_from_keras_callback base_model: microsoft/layoutlm-base-uncased model-index: - name: layoutlm-funsd-tf results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlm-funsd-tf This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results ### Framework versions - Transformers 4.41.1 - TensorFlow 2.16.1 - Datasets 2.19.1 - Tokenizers 0.19.1
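The hyperparameter block above spells out an AdamWeightDecay configuration and mixed_float16 training for this TF LayoutLM fine-tune; here is a sketch of recreating just that optimizer setup (the model, data pipeline, and step count are not stated in the card and are omitted):

```python
# Sketch: optimizer and precision policy matching the hyperparameters listed above
import tensorflow as tf
from transformers import AdamWeightDecay

tf.keras.mixed_precision.set_global_policy("mixed_float16")  # training_precision from the card

optimizer = AdamWeightDecay(
    learning_rate=3e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    weight_decay_rate=0.01,
)
```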
DiederikMartens/mBERT_sa_cv_9_fold5
DiederikMartens
2024-05-28T09:54:45Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T09:33:29Z
--- license: apache-2.0 base_model: google-bert/bert-base-multilingual-cased tags: - generated_from_trainer metrics: - f1 model-index: - name: mBERT_sa_cv_9_fold5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mBERT_sa_cv_9_fold5 This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6589 - F1: 0.5859 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.47e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 325 | 0.4985 | 0.4957 | | 0.5084 | 2.0 | 650 | 0.5099 | 0.5604 | | 0.5084 | 3.0 | 975 | 0.6589 | 0.5859 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
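Several of the DiederikMartens cards in this dump (this one included) share the same Trainer setup; the following sketch shows the TrainingArguments implied by the hyperparameters above, with the model and dataset wiring left out because the cards do not name the data:

```python
# Sketch: TrainingArguments mirroring the listed hyperparameters (output_dir is an assumption)
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mBERT_sa_cv_9_fold5",
    learning_rate=4.47e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
)
```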
DiederikMartens/tsBERT_sa_cv_9_fold5
DiederikMartens
2024-05-28T09:54:12Z
109
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:igorsterner/german-english-code-switching-bert", "base_model:finetune:igorsterner/german-english-code-switching-bert", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T09:33:05Z
--- license: mit base_model: igorsterner/german-english-code-switching-bert tags: - generated_from_trainer metrics: - f1 model-index: - name: tsBERT_sa_cv_9_fold5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tsBERT_sa_cv_9_fold5 This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6017 - F1: 0.6669 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.47e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 325 | 0.4235 | 0.5871 | | 0.439 | 2.0 | 650 | 0.4690 | 0.6196 | | 0.439 | 3.0 | 975 | 0.6017 | 0.6669 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
fine-tuned/TRECCOVID-512-192-gpt-4o-2024-05-13-653452
fine-tuned
2024-05-28T09:50:43Z
7
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "Information", "Search", "Text", "Query", "Document", "en", "dataset:fine-tuned/TRECCOVID-512-192-gpt-4o-2024-05-13-653452", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T09:50:13Z
--- license: apache-2.0 datasets: - fine-tuned/TRECCOVID-512-192-gpt-4o-2024-05-13-653452 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - Information - Search - Text - Query - Document --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: general domain ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/TRECCOVID-512-192-gpt-4o-2024-05-13-653452', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-14571
fine-tuned
2024-05-28T09:50:32Z
6
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "Health", "Medicine", "Treatment", "Diagnosis", "Research", "en", "dataset:fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-14571", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T09:50:02Z
--- license: apache-2.0 datasets: - fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-14571 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - Health - Medicine - Treatment - Diagnosis - Research --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: medical domain ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-14571', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
fine-tuned/TRECCOVID-512-192-gpt-4o-2024-05-13-347397
fine-tuned
2024-05-28T09:47:48Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "COVID-19", "pandemic", "healthcare", "virus", "public health", "en", "dataset:fine-tuned/TRECCOVID-512-192-gpt-4o-2024-05-13-347397", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T09:47:16Z
--- license: apache-2.0 datasets: - fine-tuned/TRECCOVID-512-192-gpt-4o-2024-05-13-347397 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - COVID-19 - pandemic - healthcare - virus - public health --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: COVID-19 ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/TRECCOVID-512-192-gpt-4o-2024-05-13-347397', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-607244
fine-tuned
2024-05-28T09:47:34Z
7
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "debate", "opposition", "dispute", "contradiction", "refutation", "en", "dataset:fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-607244", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T09:47:05Z
--- license: apache-2.0 datasets: - fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-607244 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - debate - opposition - dispute - contradiction - refutation --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-607244', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-439294
fine-tuned
2024-05-28T09:47:15Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "Finance", "Investment", "Economy", "Markets", "Banking", "en", "dataset:fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-439294", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T09:46:47Z
--- license: apache-2.0 datasets: - fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-439294 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - Finance - Investment - Economy - Markets - Banking --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: financial domain ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-439294', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
Jass07/mux_may25
Jass07
2024-05-28T09:41:14Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-28T09:36:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JiAYu1997/HRJD_FinetuneV2_2.2
JiAYu1997
2024-05-28T09:41:07Z
0
0
null
[ "trl", "sft", "generated_from_trainer", "base_model:taide/Llama3-TAIDE-LX-8B-Chat-Alpha1", "base_model:finetune:taide/Llama3-TAIDE-LX-8B-Chat-Alpha1", "license:other", "region:us" ]
null
2024-05-28T08:52:24Z
--- license: other base_model: taide/Llama3-TAIDE-LX-8B-Chat-Alpha1 tags: - trl - sft - generated_from_trainer model-index: - name: HRJD_FinetuneV2_2.2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HRJD_FinetuneV2_2.2 This model is a fine-tuned version of [taide/Llama3-TAIDE-LX-8B-Chat-Alpha1](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 3000 ### Training results ### Framework versions - Transformers 4.33.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.13.3
DiederikMartens/eBERT_sa_cv_9_fold4
DiederikMartens
2024-05-28T09:37:41Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T09:15:37Z
--- license: apache-2.0 base_model: google-bert/bert-base-cased tags: - generated_from_trainer metrics: - f1 model-index: - name: eBERT_sa_cv_9_fold4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eBERT_sa_cv_9_fold4 This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6780 - F1: 0.5261 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.47e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 325 | 0.6047 | 0.4431 | | 0.6277 | 2.0 | 650 | 0.5383 | 0.4894 | | 0.6277 | 3.0 | 975 | 0.6780 | 0.5261 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
DiederikMartens/tsBERT_sa_cv_9_fold4
DiederikMartens
2024-05-28T09:32:58Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:igorsterner/german-english-code-switching-bert", "base_model:finetune:igorsterner/german-english-code-switching-bert", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T09:11:35Z
--- license: mit base_model: igorsterner/german-english-code-switching-bert tags: - generated_from_trainer metrics: - f1 model-index: - name: tsBERT_sa_cv_9_fold4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tsBERT_sa_cv_9_fold4 This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5628 - F1: 0.7043 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.47e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 325 | 0.4678 | 0.6008 | | 0.4412 | 2.0 | 650 | 0.4482 | 0.6771 | | 0.4412 | 3.0 | 975 | 0.5628 | 0.7043 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
sihyeok000/licenseplateDetect
sihyeok000
2024-05-28T09:32:05Z
188
0
transformers
[ "transformers", "safetensors", "detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
object-detection
2024-05-28T09:31:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
khaninza/llava_model
khaninza
2024-05-28T09:32:02Z
3
0
peft
[ "peft", "safetensors", "llava_llama", "arxiv:1910.09700", "base_model:liuhaotian/llava-v1.6-mistral-7b", "base_model:adapter:liuhaotian/llava-v1.6-mistral-7b", "region:us" ]
null
2024-05-28T09:03:08Z
--- library_name: peft base_model: liuhaotian/llava-v1.6-mistral-7b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
kaya218/llama3_finetune_kr
kaya218
2024-05-28T09:28:03Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-28T09:15:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DiederikMartens/gBERT_sa_cv_9_fold4
DiederikMartens
2024-05-28T09:27:44Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-german-cased", "base_model:finetune:google-bert/bert-base-german-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T09:07:37Z
--- license: mit base_model: google-bert/bert-base-german-cased tags: - generated_from_trainer metrics: - f1 model-index: - name: gBERT_sa_cv_9_fold4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gBERT_sa_cv_9_fold4 This model is a fine-tuned version of [google-bert/bert-base-german-cased](https://huggingface.co/google-bert/bert-base-german-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6577 - F1: 0.6905 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.47e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 325 | 0.4453 | 0.5414 | | 0.4336 | 2.0 | 650 | 0.5178 | 0.6659 | | 0.4336 | 3.0 | 975 | 0.6577 | 0.6905 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
DineshRajanT/phi2-qlora-finetuned_new_v1_768_
DineshRajanT
2024-05-28T09:16:47Z
0
1
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-28T09:16:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gaianet/Llama-3-Instruct-8B-SimPO-GGUF
gaianet
2024-05-28T09:14:49Z
35
0
transformers
[ "transformers", "gguf", "llama", "text-generation", "en", "base_model:princeton-nlp/Llama-3-Instruct-8B-SimPO", "base_model:quantized:princeton-nlp/Llama-3-Instruct-8B-SimPO", "license:other", "autotrain_compatible", "region:us" ]
text-generation
2024-05-28T02:39:08Z
--- language: - en license: other license_name: llama3 model_name: Llama-3-Instruct-8B-SimPO base_model: princeton-nlp/Llama-3-Instruct-8B-SimPO inference: false model_creator: princeton-nlp model_type: llama pipeline_tag: text-generation quantized_by: Second State Inc. --- ![](https://github.com/GaiaNet-AI/.github/assets/45785633/d6976adc-f97d-4f86-a648-0f2f5c8e7eee) # Llama-3-Instruct-8B-SimPO-GGUF ## Original Model [princeton-nlp/Llama-3-Instruct-8B-SimPO](https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-SimPO) ## Run with Gaianet **Prompt template** prompt template: `llama-3-chat` **Context size** chat_ctx_size: `4096` **Run with GaiaNet** - Quick start: https://docs.gaianet.ai/node-guide/quick-start - Customize your node: https://docs.gaianet.ai/node-guide/customize ## Quantized GGUF Models | Name | Quant method | Bits | Size | Use case | | ---- | ---- | ---- | ---- | ----- | | [Llama-3-Instruct-8B-SimPO-Q2_K.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q2_K.gguf) | Q2_K | 2 | 3.18 GB| smallest, significant quality loss - not recommended for most purposes | | [Llama-3-Instruct-8B-SimPO-Q3_K_L.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q3_K_L.gguf) | Q3_K_L | 3 | 4.32 GB| small, substantial quality loss | | [Llama-3-Instruct-8B-SimPO-Q3_K_M.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q3_K_M.gguf) | Q3_K_M | 3 | 4.02 GB| very small, high quality loss | | [Llama-3-Instruct-8B-SimPO-Q3_K_S.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q3_K_S.gguf) | Q3_K_S | 3 | 3.66 GB| very small, high quality loss | | [Llama-3-Instruct-8B-SimPO-Q4_0.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q4_0.gguf) | Q4_0 | 4 | 4.66 GB| legacy; small, very high quality loss - prefer using Q3_K_M | | [Llama-3-Instruct-8B-SimPO-Q4_K_M.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q4_K_M.gguf) | Q4_K_M | 4 | 4.92 GB| medium, balanced quality - recommended | | [Llama-3-Instruct-8B-SimPO-Q4_K_S.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q4_K_S.gguf) | Q4_K_S | 4 | 4.69 GB| small, greater quality loss | | [Llama-3-Instruct-8B-SimPO-Q5_0.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q5_0.gguf) | Q5_0 | 5 | 5.6 GB| legacy; medium, balanced quality - prefer using Q4_K_M | | [Llama-3-Instruct-8B-SimPO-Q5_K_M.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q5_K_M.gguf) | Q5_K_M | 5 | 5.73 GB| large, very low quality loss - recommended | | [Llama-3-Instruct-8B-SimPO-Q5_K_S.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q5_K_S.gguf) | Q5_K_S | 5 | 5.6 GB| large, low quality loss - recommended | | [Llama-3-Instruct-8B-SimPO-Q6_K.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q6_K.gguf) | Q6_K | 6 | 6.6 GB| very large, extremely low quality loss | | [Llama-3-Instruct-8B-SimPO-Q8_0.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q8_0.gguf) | Q8_0 | 8 | 8.54 GB| very large, extremely low quality loss - not recommended | | 
[Llama-3-Instruct-8B-SimPO-f16.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-f16.gguf) | f16 | 16 | 16.1 GB| | *Quantized with llama.cpp b2963.*
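For illustration, a minimal sketch (not part of the original card) of chatting with one of the quants listed above via llama-cpp-python, assuming the Q5_K_M file has been downloaded locally and that llama-cpp-python's built-in `llama-3` chat format matches the `llama-3-chat` prompt template named above:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load the downloaded quant with the 4096-token context recommended above.
llm = Llama(
    model_path="Llama-3-Instruct-8B-SimPO-Q5_K_M.gguf",
    n_ctx=4096,
    chat_format="llama-3",
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what SimPO changes about preference optimization."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

Any other quant from the table works the same way; only `model_path` changes.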
wegqrbeba/shawgpt-ft
wegqrbeba
2024-05-28T09:13:00Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-05-28T09:12:58Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ model-index: - name: shawgpt-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # shawgpt-ft This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8869 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.5939 | 0.9231 | 3 | 3.9642 | | 4.0492 | 1.8462 | 6 | 3.4331 | | 3.4679 | 2.7692 | 9 | 2.9784 | | 2.2607 | 4.0 | 13 | 2.5703 | | 2.6935 | 4.9231 | 16 | 2.3356 | | 2.3731 | 5.8462 | 19 | 2.1395 | | 2.1518 | 6.7692 | 22 | 1.9966 | | 1.5471 | 8.0 | 26 | 1.9573 | | 2.0136 | 8.9231 | 29 | 1.8967 | | 1.3856 | 9.2308 | 30 | 1.8869 | ### Framework versions - PEFT 0.11.1 - Transformers 4.40.2 - Pytorch 2.1.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
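As a hedged sketch (not part of the generated card), the adapter above can be attached to its GPTQ base model with transformers and peft; this assumes a CUDA GPU and that the GPTQ runtime dependencies (optimum / auto-gptq) are installed, and the prompt format shown is only a guess since the training data is not documented:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"
adapter_id = "wegqrbeba/shawgpt-ft"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned LoRA adapter

# Mistral-Instruct style prompt; the actual training prompt format is an assumption.
prompt = "[INST] Explain what this fine-tune was trained to do. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```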
wegqrbeba/localgpt-ft
wegqrbeba
2024-05-28T09:12:58Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-28T09:12:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DiederikMartens/tsBERT_sa_cv_9_fold3
DiederikMartens
2024-05-28T09:11:28Z
109
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:igorsterner/german-english-code-switching-bert", "base_model:finetune:igorsterner/german-english-code-switching-bert", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T08:50:03Z
--- license: mit base_model: igorsterner/german-english-code-switching-bert tags: - generated_from_trainer metrics: - f1 model-index: - name: tsBERT_sa_cv_9_fold3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tsBERT_sa_cv_9_fold3 This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4441 - F1: 0.6870 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.47e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 325 | 0.3836 | 0.6556 | | 0.4526 | 2.0 | 650 | 0.3827 | 0.6609 | | 0.4526 | 3.0 | 975 | 0.4441 | 0.6870 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
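A minimal usage sketch, not part of the generated card: the checkpoint can be loaded for inference with the transformers text-classification pipeline. The dataset and label mapping are not documented above, so the returned label names should be treated as placeholders to inspect rather than known classes.

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="DiederikMartens/tsBERT_sa_cv_9_fold3",
)

# German-English code-switched input, matching the base model's domain.
print(clf("Das Projekt läuft gut, aber the deadline is stressing me out."))
# -> e.g. [{'label': 'LABEL_1', 'score': 0.87}]  (label meaning depends on the training data)
```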
QuantFactory/Phi-3-medium-4k-instruct-abliterated-v3-GGUF
QuantFactory
2024-05-28T09:10:30Z
37
2
null
[ "gguf", "nlp", "code", "text-generation", "multilingual", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-28T05:57:40Z
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation tags: - nlp - code inference: parameters: temperature: 0.7 widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- # Phi-3-medium-4k-instruct-abliterated-v3-GGUF This is a quantized version of [failspy/Phi-3-medium-4k-instruct-abliterated-v3](https://huggingface.co/failspy/Phi-3-medium-4k-instruct-abliterated-v3) created using llama.cpp # Model Description [failspy's Jupyter "cookbook" to replicate the methodology can be found here, refined library coming soon](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb) #### Phi-3-abliterated statement Took me a while to wizard this one up. It’s been a while since I’ve released a Phi-3 model. In the past I accidentally missed an item required in the model release process - hallucination testing. This model has been tested, and though it is more likely to hallucinate than the original model in my experience, it is generally as stable as the original. Now that the new Phi-3 models are out, I'm working on completing this abliteration process quickly and will then release the other models as soon as possible. 🏇 ## Summary This is [microsoft/Phi-3-medium-4k-instruct](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on the one described in the preview paper/blog post '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)', which I encourage you to read to understand more. ## Hang on, "abliterated"? Orthogonalization? Ablation? What is this? TL;DR: This model has had certain weights manipulated to "inhibit" the model's ability to express refusal. It is not in any way _guaranteed_ that it won't refuse you or misunderstand your request, and it may still lecture you about ethics/safety, etc. It is tuned in all other respects the same as the original instruct model was, just with the strongest refusal directions orthogonalized out. **TL;TL;DR;DR: It's uncensored in the purest form I can manage -- no new or changed behaviour in any other respect from the original model.** As far as "abliterated" goes: it's just a fun play on words on the "ablation" term the original paper uses for removing features, which I coined to differentiate the model from "uncensored" fine-tunes. Ablate + obliterated = Abliterated. Anyways, orthogonalization and ablation both refer to the same thing here: the refusal feature was "ablated" from the model via orthogonalization. ## A little more on the methodology, and why this is interesting To me, ablation (or applying the methodology for the inverse, "augmentation") seems to be good for inducing/removing very specific features that you'd have to spend way too many tokens on encouraging or discouraging in your system prompt. Instead, you just apply your system prompt in the ablation script against a blank system prompt on the same dataset and orthogonalize for the desired behaviour in the final model weights. > Why this over fine-tuning? Ablation is much more surgical in nature whilst also being effectively executed with a _lot_ less data than fine-tuning, which I think is its main advantage.
Its most valuable aspect, though, is that it keeps as much of the original model's knowledge and training intact as possible, whilst removing its tendency to behave in one very specific undesirable manner. (In this case, refusing user requests.) Fine-tuning is still exceptionally useful and the go-to for broad behaviour changes; however, you may be able to get close to your desired behaviour with very few samples using the ablation/augmentation techniques. It may also be a useful step to add to your model refinement: orthogonalize -> fine-tune, or vice versa. I haven't really gotten around to exploring this model stacked with fine-tuning; I encourage others to give it a shot if they've got the capacity. > Okay, fine, but why V3? There's no V2? Well, I released a V2 of an abliterated model a while back for Meta-Llama-3-8B under Cognitive Computations. It ended up not being worth it to try V2 with larger models; I wanted to refine the methodology before wasting compute cycles on what might not even be a better model. I am, however, quite pleased with this latest methodology; it seems to have induced fewer hallucinations. So, to show that the methodology is a step up even from that of the 8B V2, I decided to do a Microsoft and double up on my version jump, because it's *such* an advancement (or so the excuse went; in actuality it was because too many legacy but actively used Microsoft libraries checked for 'Windows 9' in the OS name to detect Windows 95/98). ## Quirkiness awareness notice This model may come with interesting quirks, with the methodology being so new. I encourage you to play with the model and post any quirks you notice in the community tab, as that'll help us further understand what side effects this orthogonalization has. If you manage to develop further improvements, please share! This is really the most basic way to use ablation, but there are other possibilities that I believe are as yet unexplored. Additionally, feel free to reach out in any way about this. I'm on the Cognitive Computations Discord and I'm watching the Community tab; reach out! I'd love to see this methodology used in other ways, and would gladly support whomever I can, whenever I can.
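To make the orthogonalization idea above concrete, here is a minimal, illustrative sketch (not the cookbook's actual code) of projecting an estimated refusal direction out of a weight matrix that writes into the residual stream. Estimating the direction itself, typically by contrasting activations on refused versus complied-with prompts, is omitted here.

```python
import torch

def ablate_direction(W_out: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Orthogonalize a weight matrix against a single feature direction.

    W_out:       (d_model, d_in) matrix whose outputs land in the residual stream
                 (e.g. an attention output projection or MLP down-projection).
    refusal_dir: (d_model,) estimated "refusal" direction.
    """
    d = refusal_dir / refusal_dir.norm()
    # W' = (I - d d^T) W, so every column of W' is orthogonal to d and the
    # layer can no longer write along the ablated direction.
    return W_out - torch.outer(d, d @ W_out)
```

Applied to every matrix that writes into the residual stream, this is the simplest form of the "inhibit the ability to express refusal" operation the card describes.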