Dataset schema (one row per model; observed minima and maxima from the viewer):

| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-07-14 12:27:51 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (520 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-07-14 12:25:52 |
| card | string (length) | 11 | 1.01M |
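The rows below follow this schema, so a short sketch of loading and inspecting such a dump with 🤗 `datasets` may help. The dataset ID used here is a placeholder, since this excerpt does not give the dump's actual Hub ID:

```python
# Minimal sketch of loading a dump with this schema.
# NOTE: "your-org/model-cards-dump" is a hypothetical ID; substitute the real one.
from datasets import load_dataset

ds = load_dataset("your-org/model-cards-dump", split="train")
print(ds.features)  # modelId, author, last_modified, downloads, likes, ...

row = ds[0]
print(row["modelId"], row["downloads"], row["pipeline_tag"])
print(row["card"][:200])  # first characters of the raw model-card markdown
```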
PipableAI/pip-api-data-training-v1
PipableAI
2024-05-24T12:06:15Z
9
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T08:32:23Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
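The card's "How to Get Started with the Model" section is empty. Going only by this row's metadata (a `transformers` llama checkpoint tagged `text-generation` and `conversational`), a minimal, unverified sketch might look like this; nothing below comes from the card itself, and the chat-template assumption rests solely on the `conversational` tag:

```python
# Hedged sketch: assumes the checkpoint loads as a standard causal LM
# with a chat template, which the row's tags suggest but the card does not confirm.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PipableAI/pip-api-data-training-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```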
L-NLProc/PredEx_Llama-2-7B_Pred
L-NLProc
2024-05-24T12:06:05Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-24T12:06:05Z
---
license: apache-2.0
---
mlx-community/JOSIE-TinyLlama-1.1B-32k-base-8bit
mlx-community
2024-05-24T12:04:25Z
84
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mlx", "base_model:Doctor-Shotgun/TinyLlama-1.1B-32k", "base_model:finetune:Doctor-Shotgun/TinyLlama-1.1B-32k", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T11:59:08Z
---
library_name: transformers
tags:
- mlx
base_model: Doctor-Shotgun/TinyLlama-1.1B-32k
---

# mlx-community/JOSIE-TinyLlama-1.1B-32k-base-8bit

This model was converted to MLX format from [Isaak-Carter/JOSIE_TinyLlama_1.1B_32k_Base](https://huggingface.co/Isaak-Carter/JOSIE_TinyLlama_1.1B_32k_Base) using mlx-lm version **0.12.1**. Refer to the [original model card](https://huggingface.co/Isaak-Carter/JOSIE_TinyLlama_1.1B_32k_Base) for more details on the model.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/JOSIE-TinyLlama-1.1B-32k-base-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
team-sanai/zoo_math_expert_epoch_1_v2
team-sanai
2024-05-24T11:59:21Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-24T11:56:11Z
(Auto-generated 🤗 transformers template model card, verbatim duplicate of the one reproduced above for PipableAI/pip-api-data-training-v1; all fields read [More Information Needed].)
IVN-RIN/medBIT-r3-plus
IVN-RIN
2024-05-24T11:58:02Z
158
2
transformers
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "Biomedical Language Modeling", "it", "dataset:IVN-RIN/BioBERT_Italian", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-12-01T12:03:49Z
---
language:
- it
tags:
- Biomedical Language Modeling
widget:
- text: >-
    L'asma allergica è una patologia dell'[MASK] respiratorio causata dalla
    presenza di allergeni responsabili dell'infiammazione dell'albero bronchiale.
  example_title: Example 1
- text: >-
    Il pancreas produce diversi [MASK] molto importanti tra i quali l'insulina
    e il glucagone.
  example_title: Example 2
- text: >-
    Il GABA è un amminoacido ed è il principale neurotrasmettitore inibitorio
    del [MASK].
  example_title: Example 3
datasets:
- IVN-RIN/BioBERT_Italian
---

🤗 + 📚🩺🇮🇹 + 📖🧑‍⚕️ + 🌐⚕️ = **MedBIT-r3-plus**

From this repository you can download the **MedBIT-r3-plus** (Medical Bert for ITalian) checkpoint.

**MedBIT-r3-plus** is built on top of [BioBIT](https://huggingface.co/IVN-RIN/bioBIT), further pretrained on a corpus of medical textbooks, either written directly by Italian authors or translated by professional human translators, that are used in formal medical doctors' education and specialized training. This corpus amounts to 100 MB of data. Such comprehensive collections of medical concepts can impact how biomedical knowledge is encoded in language models, with the advantage of being natively available in Italian rather than translated. Online dissemination of healthcare information is another source of biomedical text that is commonly available in many less-resourced languages, so we also gathered an additional 100 MB of web-crawled data from reliable Italian, health-related websites. More details are in the paper.

**MedBIT-r3-plus** has been evaluated on 3 downstream tasks: **NER** (Named Entity Recognition), extractive **QA** (Question Answering), and **RE** (Relation Extraction). Here are the results, summarized:

- NER:
  - [BC2GM](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb32) = 81.87%
  - [BC4CHEMD](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb35) = 80.68%
  - [BC5CDR(CDR)](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb31) = 81.97%
  - [BC5CDR(DNER)](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb31) = 76.32%
  - [NCBI_DISEASE](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb33) = 63.36%
  - [SPECIES-800](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb34) = 63.90%
- QA:
  - [BioASQ 4b](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb30) = 68.21%
  - [BioASQ 5b](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb30) = 77.89%
  - [BioASQ 6b](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb30) = 75.28%
- RE:
  - [CHEMPROT](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb36) = 38.82%
  - [BioRED](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb37) = 67.62%

[Check the full paper](https://www.sciencedirect.com/science/article/pii/S1532046423001521) for further details, and feel free to contact us if you have any inquiries!
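The card gives widget examples but no code. Here is a minimal, unofficial sketch of querying the checkpoint with the 🤗 `fill-mask` pipeline, using one of the card's own widget sentences; the same pattern applies to the sibling IVN-RIN/medBIT and IVN-RIN/bioBIT checkpoints further down:

```python
# Minimal sketch (not from the original card): BERT-style fill-mask inference.
from transformers import pipeline

fill = pipeline("fill-mask", model="IVN-RIN/medBIT-r3-plus")
text = ("Il pancreas produce diversi [MASK] molto importanti "
        "tra i quali l'insulina e il glucagone.")
for pred in fill(text, top_k=3):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```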
FO-UA/adapt-llm-Timesheet-Fr-40xr512
FO-UA
2024-05-24T11:57:49Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:AdaptLLM/finance-chat", "base_model:adapter:AdaptLLM/finance-chat", "region:us" ]
null
2024-05-24T11:56:35Z
(Auto-generated template model card with the same skeleton as the one reproduced above, differing only in its frontmatter, `library_name: peft` and `base_model: AdaptLLM/finance-chat`, omitting the "This is the model card of a 🤗 transformers model" sentence, and adding a trailing "### Framework versions" section listing PEFT 0.11.1.)
IVN-RIN/medBIT
IVN-RIN
2024-05-24T11:57:39Z
120
2
transformers
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "Biomedical Language Modeling", "it", "dataset:IVN-RIN/BioBERT_Italian", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-12-02T12:48:47Z
---
language:
- it
tags:
- Biomedical Language Modeling
widget:
- text: >-
    L'asma allergica è una patologia dell'[MASK] respiratorio causata dalla
    presenza di allergeni responsabili dell'infiammazione dell'albero bronchiale.
  example_title: Example 1
- text: >-
    Il pancreas produce diversi [MASK] molto importanti tra i quali l'insulina
    e il glucagone.
  example_title: Example 2
- text: >-
    Il GABA è un amminoacido ed è il principale neurotrasmettitore inibitorio
    del [MASK].
  example_title: Example 3
datasets:
- IVN-RIN/BioBERT_Italian
---

🤗 + 📚🩺🇮🇹 + 📖🧑‍⚕️ = **MedBIT**

From this repository you can download the **MedBIT** (Medical Bert for ITalian) checkpoint.

MedBIT is built on top of [BioBIT](https://huggingface.co/IVN-RIN/bioBIT), further pretrained on a corpus of medical textbooks, either written directly by Italian authors or translated by professional human translators, that are used in formal medical doctors' education and specialized training. This corpus amounts to 100 MB of data. Such comprehensive collections of medical concepts can impact how biomedical knowledge is encoded in language models, with the advantage of being natively available in Italian rather than translated. More details are in the paper.

[Check the full paper](https://www.sciencedirect.com/science/article/pii/S1532046423001521) for further details, and feel free to contact us if you have any inquiries!
MuntasirHossain/Meta-Llama-3-8B-OpenOrca
MuntasirHossain
2024-05-24T11:57:13Z
12
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "orpo", "llama 3", "sft", "en", "dataset:Open-Orca/OpenOrca", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-14T07:13:27Z
---
language:
- en
library_name: transformers
tags:
- orpo
- llama 3
- sft
datasets:
- Open-Orca/OpenOrca
---

# Model description

Meta-Llama-3-8B-OpenOrca is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on a 1.5k-sample subset of the [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) dataset. This model follows the ChatML chat template!

## How to use

```python
import torch
from transformers import AutoTokenizer, pipeline

model = "MuntasirHossain/Meta-Llama-3-8B-OpenOrca"
tokenizer = AutoTokenizer.from_pretrained(model)
llm = pipeline(
    task="text-generation",
    model=model,
    eos_token_id=tokenizer.eos_token_id,
    torch_dtype=torch.float16,
    max_new_tokens=256,
    do_sample=True,
    device_map="auto",
)

def generate(input_text):
    system_prompt = "You are a helpful AI assistant."
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": input_text},
    ]
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    outputs = llm(prompt)
    return outputs[0]["generated_text"][len(prompt):]

generate("What is a large language model?")
```
IVN-RIN/bioBIT
IVN-RIN
2024-05-24T11:57:03Z
375
1
transformers
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "Biomedical Language Modeling", "it", "dataset:IVN-RIN/BioBERT_Italian", "arxiv:1901.08746", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-12-03T11:17:50Z
---
language:
- it
tags:
- Biomedical Language Modeling
widget:
- text: >-
    L'asma allergica è una patologia dell'[MASK] respiratorio causata dalla
    presenza di allergeni responsabili dell'infiammazione dell'albero bronchiale.
  example_title: Example 1
- text: >-
    Il pancreas produce diversi [MASK] molto importanti tra i quali l'insulina
    e il glucagone.
  example_title: Example 2
- text: >-
    Il GABA è un amminoacido ed è il principale neurotrasmettitore inibitorio
    del [MASK].
  example_title: Example 3
datasets:
- IVN-RIN/BioBERT_Italian
---

🤗 + 📚🩺🇮🇹 = **BioBIT**

From this repository you can download the **BioBIT** (Biomedical Bert for ITalian) checkpoint.

**BioBIT** stems from [Italian XXL BERT](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased), which was trained on a recent Wikipedia dump and various Italian texts from the OPUS and OSCAR corpora collections, adding up to a final corpus size of 81 GB and 13B tokens.

To pretrain **BioBIT**, we followed the general approach outlined in the [BioBERT paper](https://arxiv.org/abs/1901.08746), built on the foundation of the BERT architecture. The pretraining objective is a combination of **MLM** (Masked Language Modelling) and **NSP** (Next Sentence Prediction). The MLM objective randomly masks 15% of the input tokens, which the model then tries to predict; for the NSP objective, the model is given a pair of sentences and has to guess whether the second follows the first in the original document. Since there is no Italian equivalent of the millions of abstracts and full-text scientific papers used by English BERT-based biomedical models, in this work we leveraged machine translation to obtain an Italian biomedical corpus based on PubMed abstracts and used it to train **BioBIT**. More details are in the paper.

**BioBIT** has been evaluated on 3 downstream tasks: **NER** (Named Entity Recognition), extractive **QA** (Question Answering), and **RE** (Relation Extraction). Here are the results, summarized:

- NER:
  - [BC2GM](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb32) = 82.14%
  - [BC4CHEMD](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb35) = 80.70%
  - [BC5CDR(CDR)](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb31) = 82.15%
  - [BC5CDR(DNER)](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb31) = 76.27%
  - [NCBI_DISEASE](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb33) = 65.06%
  - [SPECIES-800](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb34) = 61.86%
- QA:
  - [BioASQ 4b](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb30) = 68.49%
  - [BioASQ 5b](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb30) = 78.33%
  - [BioASQ 6b](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb30) = 75.73%
- RE:
  - [CHEMPROT](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb36) = 38.16%
  - [BioRED](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb37) = 67.15%

[Check the full paper](https://www.sciencedirect.com/science/article/pii/S1532046423001521) for further details, and feel free to contact us if you have any inquiries!
DanteKwak/Llama-3-8B-Data-Science-Interview-QNA
DanteKwak
2024-05-24T11:52:14Z
66
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T11:37:26Z
---
license: apache-2.0
---
hgnoi/fci2jnBIFDSIiMIG
hgnoi
2024-05-24T11:48:02Z
138
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T11:44:08Z
(Auto-generated 🤗 transformers template model card, verbatim duplicate of the one reproduced above for PipableAI/pip-api-data-training-v1; all fields read [More Information Needed].)
stinkypoop/qw
stinkypoop
2024-05-24T11:41:50Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:Qwen/Qwen1.5-0.5B-Chat", "base_model:adapter:Qwen/Qwen1.5-0.5B-Chat", "license:other", "region:us" ]
null
2024-05-24T11:40:55Z
---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: Qwen/Qwen1.5-0.5B-Chat
model-index:
- name: QwenTT2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# QwenTT2

This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) on the identity dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3.0
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
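The card does not show how to use the adapter. A minimal sketch, assuming this repository contains a standard PEFT LoRA adapter for the stated base model (the card itself does not confirm the layout), might look like:

```python
# Hedged sketch: load the Qwen1.5-0.5B-Chat base model and apply the LoRA
# adapter from this repo on top of it. Assumes a standard PEFT adapter layout.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B-Chat")
model = PeftModel.from_pretrained(base, "stinkypoop/qw")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B-Chat")

inputs = tokenizer("Who are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```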
CarlosJefte/llama-3-8b-bnb-4bit
CarlosJefte
2024-05-24T11:27:01Z
67
0
transformers
[ "transformers", "pytorch", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-13T17:26:27Z
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---

# Uploaded model

- **Developed by:** CarlosJefte
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
iot/Gemma_model_fine_tune_custom_Data
iot
2024-05-24T11:26:39Z
77
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-24T11:06:06Z
# Gemma Model Fine-Tuned on Custom Data

## Model Description

This model is a fine-tuned version of the Gemma model on custom data. It was trained using the SFTTrainer and incorporates LoRA configurations to enhance performance.

## Training Procedure

- **Batch size**: 1
- **Gradient accumulation steps**: 4
- **Learning rate**: 2e-4
- **Warmup steps**: 2
- **Max steps**: 100
- **Optimizer**: Paged AdamW 8-bit
- **FP16**: Enabled

## Usage

Below is an example of how to load and use the model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("iot/Gemma_model_fine_tune_custom_Data")
model = AutoModelForCausalLM.from_pretrained("iot/Gemma_model_fine_tune_custom_Data")

input_text = "Your input text here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
sravan-gorugantu/model2024-05-24
sravan-gorugantu
2024-05-24T11:25:19Z
167
0
transformers
[ "transformers", "tensorboard", "safetensors", "hubert", "audio-classification", "generated_from_trainer", "dataset:audiofolder", "base_model:ntu-spml/distilhubert", "base_model:finetune:ntu-spml/distilhubert", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2024-05-24T08:51:49Z
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- accuracy
model-index:
- name: model2024-05-24
  results:
  - task:
      name: Audio Classification
      type: audio-classification
    dataset:
      name: audiofolder
      type: audiofolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.965810121118249
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# model2024-05-24

This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the audiofolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.0906
- Accuracy: 0.9658

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2199        | 1.0   | 615  | 0.1805          | 0.9320   |
| 0.179         | 2.0   | 1231 | 0.1230          | 0.9534   |
| 0.1089        | 3.0   | 1846 | 0.1019          | 0.9616   |
| 0.1152        | 4.0   | 2462 | 0.0939          | 0.9645   |
| 0.0811        | 5.0   | 3075 | 0.0906          | 0.9658   |

### Framework versions

- Transformers 4.38.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
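The card lists metrics but no usage snippet. A minimal sketch for running inference, assuming the checkpoint exposes a standard audio-classification head (the file path below is a placeholder, not a file shipped with the model):

```python
# Hedged sketch: classify a local audio file with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline("audio-classification", model="sravan-gorugantu/model2024-05-24")
for pred in classifier("sample.wav"):  # "sample.wav" is a placeholder path
    print(f"{pred['label']}: {pred['score']:.3f}")
```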
hgnoi/QyHtpZj9DaOwU3yr
hgnoi
2024-05-24T11:22:19Z
132
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T11:20:45Z
(Auto-generated 🤗 transformers template model card, verbatim duplicate of the one reproduced above for PipableAI/pip-api-data-training-v1; all fields read [More Information Needed].)
mirella-guenther/openai-whisper-large-torgo
mirella-guenther
2024-05-24T11:18:27Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-24T11:18:22Z
(Auto-generated 🤗 transformers template model card, verbatim duplicate of the one reproduced above for PipableAI/pip-api-data-training-v1; all fields read [More Information Needed].)
Abdulrocks79/llama-3-8b-Instruct-bnb-4bit-abdulrocks79-demo
Abdulrocks79
2024-05-24T11:16:35Z
2
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-24T11:14:22Z
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---

# Uploaded model

- **Developed by:** Abdulrocks79
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
ik28/mistral-med-instruct
ik28
2024-05-24T11:14:58Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-22T08:44:24Z
(Auto-generated 🤗 transformers template model card, verbatim duplicate of the one reproduced above for PipableAI/pip-api-data-training-v1; all fields read [More Information Needed].)
chenehlf/transgh
chenehlf
2024-05-24T11:13:34Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-24T11:13:14Z
---
license: apache-2.0
---
drMurder/video-retalking
drMurder
2024-05-24T11:10:14Z
0
0
null
[ "arxiv:2211.14758", "region:us" ]
null
2024-05-24T10:55:51Z
<div align="center"> <h2>VideoReTalking <br/> <span style="font-size:12px">Audio-based Lip Synchronization for Talking Head Video Editing in the Wild</span> </h2> <a href='https://arxiv.org/abs/2211.14758'><img src='https://img.shields.io/badge/ArXiv-2211.14758-red'></a> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href='https://vinthony.github.io/video-retalking/'><img src='https://img.shields.io/badge/Project-Page-Green'></a> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vinthony/video-retalking/blob/main/quick_demo.ipynb)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; [![Replicate](https://replicate.com/cjwbw/video-retalking/badge)](https://replicate.com/cjwbw/video-retalking) <div> <a target='_blank'>Kun Cheng <sup>*,1,2</sup> </a>&emsp; <a href='https://vinthony.github.io/' target='_blank'>Xiaodong Cun <sup>*,2</a>&emsp; <a href='https://yzhang2016.github.io/yongnorriszhang.github.io/' target='_blank'>Yong Zhang <sup>2</sup></a>&emsp; <a href='https://menghanxia.github.io/' target='_blank'>Menghan Xia <sup>2</sup></a>&emsp; <a href='https://feiiyin.github.io/' target='_blank'>Fei Yin <sup>2,3</sup></a>&emsp;<br/> <a href='https://web.xidian.edu.cn/mrzhu/en/index.html' target='_blank'>Mingrui Zhu <sup>1</sup></a>&emsp; <a href='https://xuanwangvc.github.io/' target='_blank'>Xuan Wang <sup>2</sup></a>&emsp; <a href='https://juewang725.github.io/' target='_blank'>Jue Wang <sup>2</sup></a>&emsp; <a href='https://web.xidian.edu.cn/nnwang/en/index.html' target='_blank'>Nannan Wang <sup>1</sup></a> </div> <br> <div> <sup>1</sup> Xidian University &emsp; <sup>2</sup> Tencent AI Lab &emsp; <sup>3</sup> Tsinghua University </div> <br> <i><strong><a href='https://sa2022.siggraph.org/' target='_blank'>SIGGRAPH Asia 2022 Conference Track</a></strong></i> <br> <br> <img src="https://opentalker.github.io/video-retalking/static/images/teaser.png" width="768px"> <div align="justify"> <BR> We present VideoReTalking, a new system to edit the faces of a real-world talking head video according to input audio, producing a high-quality and lip-syncing output video even with a different emotion. Our system disentangles this objective into three sequential tasks: <BR> (1) face video generation with a canonical expression <BR> (2) audio-driven lip-sync and <BR> (3) face enhancement for improving photo-realism. <BR> Given a talking-head video, we first modify the expression of each frame according to the same expression template using the expression editing network, resulting in a video with the canonical expression. This video, together with the given audio, is then fed into the lip-sync network to generate a lip-syncing video. Finally, we improve the photo-realism of the synthesized faces through an identity-aware face enhancement network and post-processing. 
We use learning-based approaches for all three steps and all our modules can be tackled in a sequential pipeline without any user intervention.</div> <BR> <p> <img alt='pipeline' src="./docs/static/images/pipeline.png?raw=true" width="768px"><br> <em align='center'>Pipeline</em> </p> </div> ## Results in the Wild (contains audio) https://user-images.githubusercontent.com/4397546/224310754-665eb2dd-aadc-47dc-b1f9-2029a937b20a.mp4 ## Environment ``` git clone https://github.com/vinthony/video-retalking.git cd video-retalking conda create -n video_retalking python=3.8 conda activate video_retalking conda install ffmpeg # Please follow the instructions from https://pytorch.org/get-started/previous-versions/ # This installation command only works on CUDA 11.1 pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html pip install -r requirements.txt ``` ## Quick Inference #### Pretrained Models Please download our [pre-trained models](https://drive.google.com/drive/folders/18rhjMpxK8LVVxf7PI6XwOidt8Vouv_H0?usp=share_link) and put them in `./checkpoints`. <!-- We also provide some [example videos and audio](https://drive.google.com/drive/folders/14OwbNGDCAMPPdY-l_xO1axpUjkPxI9Dv?usp=share_link). Please put them in `./examples`. --> #### Inference ``` python3 inference.py \ --face examples/face/1.mp4 \ --audio examples/audio/1.wav \ --outfile results/1_1.mp4 ``` This script includes data preprocessing steps. You can test any talking face videos without manual alignment. But it is worth noting that DNet cannot handle extreme poses. You can also control the expression by adding the following parameters: ```--exp_img```: Pre-defined expression template. The default is "neutral". You can choose "smile" or an image path. ```--up_face```: You can choose "surprise" or "angry" to modify the expression of upper face with [GANimation](https://github.com/donydchen/ganimation_replicate). ## Citation If you find our work useful in your research, please consider citing: ``` @misc{cheng2022videoretalking, title={VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild}, author={Kun Cheng and Xiaodong Cun and Yong Zhang and Menghan Xia and Fei Yin and Mingrui Zhu and Xuan Wang and Jue Wang and Nannan Wang}, year={2022}, eprint={2211.14758}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ## Acknowledgement Thanks to [Wav2Lip](https://github.com/Rudrabha/Wav2Lip), [PIRenderer](https://github.com/RenYurui/PIRender), [GFP-GAN](https://github.com/TencentARC/GFPGAN), [GPEN](https://github.com/yangxy/GPEN), [ganimation_replicate](https://github.com/donydchen/ganimation_replicate), [STIT](https://github.com/rotemtzaban/STIT) for sharing their code. 
## Related Work - [StyleHEAT: One-Shot High-Resolution Editable Talking Face Generation via Pre-trained StyleGAN (ECCV 2022)](https://github.com/FeiiYin/StyleHEAT) - [CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior (CVPR 2023)](https://github.com/Doubiiu/CodeTalker) - [SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation (CVPR 2023)](https://github.com/Winfredy/SadTalker) - [DPE: Disentanglement of Pose and Expression for General Video Portrait Editing (CVPR 2023)](https://github.com/Carlyx/DPE) - [3D GAN Inversion with Facial Symmetry Prior (CVPR 2023)](https://github.com/FeiiYin/SPI/) - [T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations (CVPR 2023)](https://github.com/Mael-zys/T2M-GPT) ## Disclaimer This is not an official product of Tencent. ``` 1. Please carefully read and comply with the open-source license applicable to this code before using it. 2. Please carefully read and comply with the intellectual property declaration applicable to this code before using it. 3. This open-source code runs completely offline and does not collect any personal information or other data. If you use this code to provide services to end-users and collect related data, please take necessary compliance measures according to applicable laws and regulations (such as publishing privacy policies, adopting necessary data security strategies, etc.). If the collected data involves personal information, user consent must be obtained (if applicable). Any legal liabilities arising from this are unrelated to Tencent. 4. Without Tencent's written permission, you are not authorized to use the names or logos legally owned by Tencent, such as "Tencent." Otherwise, you may be liable for your legal responsibilities. 5. This open-source code does not have the ability to directly provide services to end-users. If you need to use this code for further model training or demos, as part of your product to provide services to end-users, or for similar use, please comply with applicable laws and regulations for your product or service. Any legal liabilities arising from this are unrelated to Tencent. 6. It is prohibited to use this open-source code for activities that harm the legitimate rights and interests of others (including but not limited to fraud, deception, infringement of others' portrait rights, reputation rights, etc.), or other behaviors that violate applicable laws and regulations or go against social ethics and good customs (including providing incorrect or false information, spreading pornographic, terrorist, and violent information, etc.). Otherwise, you may be liable for your legal responsibilities. ``` ## All Thanks To Our Contributors <a href="https://github.com/OpenTalker/video-retalking/graphs/contributors"> <img src="https://contrib.rocks/image?repo=OpenTalker/video-retalking" /> </a>
girayo/ppo-Pyramids
girayo
2024-05-24T11:09:47Z
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2024-05-24T11:09:44Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: girayo/ppo-Pyramids 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Mauerrr/ghgru
Mauerrr
2024-05-24T11:08:15Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-24T07:44:34Z
--- license: apache-2.0 ---
T3Q-LLM/T3Q-LLM2-sft1.6
T3Q-LLM
2024-05-24T10:58:01Z
41
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T10:28:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Evaluation hf-causal-experimental (pretrained=T3Q-LLM/T3Q-LLM2-sft1.6,use_accelerate=true,trust_remote_code=true), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8 | Task |Version| Metric |Value | |Stderr| |----------------|------:|--------|-----:|---|-----:| |kobest_boolq | 0|acc |0.9416|± |0.0063| | | |macro_f1|0.9415|± |0.0063| |kobest_copa | 0|acc |0.7690|± |0.0133| | | |macro_f1|0.7685|± |0.0134| |kobest_hellaswag| 0|acc |0.5160|± |0.0224| | | |acc_norm|0.5680|± |0.0222| | | |macro_f1|0.5132|± |0.0223| |kobest_sentineg | 0|acc |0.7154|± |0.0227| | | |macro_f1|0.6977|± |0.0237|
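The scores above come from an older lm-evaluation-harness interface (`hf-causal-experimental`). As a rough sketch only, a comparable run with the current harness's Python API might look like the following; the KoBEST task names are the ones bundled with the harness, but exact scores can differ across harness versions.

```python
# Sketch: re-running the KoBEST evaluation with lm-evaluation-harness >= 0.4.
# The card used the older hf-causal-experimental CLI, so results may not match exactly.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=T3Q-LLM/T3Q-LLM2-sft1.6,trust_remote_code=True",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag", "kobest_sentineg"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```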
rbxo0128/distilbert-base-uncased-finetuned-emotion
rbxo0128
2024-05-24T10:57:28Z
119
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-24T10:52:52Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.922 - name: F1 type: f1 value: 0.9219202968700123 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2192 - Accuracy: 0.922 - F1: 0.9219 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8317 | 1.0 | 250 | 0.3238 | 0.9025 | 0.9017 | | 0.2506 | 2.0 | 500 | 0.2192 | 0.922 | 0.9219 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
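The card omits a usage snippet; a minimal sketch of running the fine-tuned classifier with the 🤗 `pipeline` API follows. The input sentence is a made-up example.

```python
from transformers import pipeline

# Load the fine-tuned emotion classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="rbxo0128/distilbert-base-uncased-finetuned-emotion",
)

# Hypothetical input; the emotion dataset uses labels such as joy, sadness, and anger.
print(classifier("I can't wait to see you this weekend!"))
```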
tsavage68/MedQA_L3_300steps_1e6rate_05beta_CSFTDPO
tsavage68
2024-05-24T10:53:54Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:tsavage68/MedQA_L3_1000steps_1e6rate_SFT", "base_model:finetune:tsavage68/MedQA_L3_1000steps_1e6rate_SFT", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T10:49:52Z
--- license: llama3 base_model: tsavage68/MedQA_L3_1000steps_1e6rate_SFT tags: - trl - dpo - generated_from_trainer model-index: - name: MedQA_L3_300steps_1e6rate_05beta_CSFTDPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MedQA_L3_300steps_1e6rate_05beta_CSFTDPO This model is a fine-tuned version of [tsavage68/MedQA_L3_1000steps_1e6rate_SFT](https://huggingface.co/tsavage68/MedQA_L3_1000steps_1e6rate_SFT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5731 - Rewards/chosen: 5.8136 - Rewards/rejected: 3.5872 - Rewards/accuracies: 0.7692 - Rewards/margins: 2.2264 - Logps/rejected: -26.6804 - Logps/chosen: -19.7013 - Logits/rejected: -0.8355 - Logits/chosen: -0.8339 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 300 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6909 | 0.0489 | 50 | 0.6059 | -0.4307 | -0.6542 | 0.7538 | 0.2235 | -35.1631 | -32.1898 | -0.7254 | -0.7246 | | 0.4343 | 0.0977 | 100 | 0.7202 | 6.9486 | 5.2431 | 0.6989 | 1.7054 | -23.3686 | -17.4314 | -0.7816 | -0.7804 | | 0.6114 | 0.1466 | 150 | 0.6428 | 3.8385 | 1.9433 | 0.7407 | 1.8951 | -29.9682 | -23.6516 | -0.8244 | -0.8232 | | 0.3522 | 0.1954 | 200 | 0.5948 | 5.1038 | 2.7837 | 0.7604 | 2.3201 | -28.2874 | -21.1208 | -0.8383 | -0.8367 | | 0.3837 | 0.2443 | 250 | 0.5746 | 5.7825 | 3.5643 | 0.7692 | 2.2182 | -26.7263 | -19.7636 | -0.8356 | -0.8340 | | 0.3658 | 0.2931 | 300 | 0.5731 | 5.8136 | 3.5872 | 0.7692 | 2.2264 | -26.6804 | -19.7013 | -0.8355 | -0.8339 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.0.0+cu117 - Datasets 2.19.1 - Tokenizers 0.19.1
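The card lists DPO training details but no inference example; a minimal sketch using the model's chat template might look like this (the question text is a placeholder, and a GPU is assumed).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/MedQA_L3_300steps_1e6rate_05beta_CSFTDPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Placeholder question; the model was preference-tuned on MedQA-style prompts.
messages = [{"role": "user", "content": "What is the first-line treatment for uncomplicated hypertension?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```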
abhicake/fine_bg_removal
abhicake
2024-05-24T10:46:59Z
96
1
transformers
[ "transformers", "pytorch", "onnx", "SegformerForSemanticSegmentation", "image-segmentation", "custom_code", "license:apache-2.0", "region:us" ]
image-segmentation
2024-05-24T09:40:40Z
--- license: apache-2.0 ---
ikmalalfaozi/layoutlmv3-finetuned-cord
ikmalalfaozi
2024-05-24T10:46:06Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "layoutlmv3", "token-classification", "generated_from_trainer", "base_model:microsoft/layoutlmv3-base", "base_model:finetune:microsoft/layoutlmv3-base", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-24T08:27:20Z
--- license: cc-by-nc-sa-4.0 base_model: microsoft/layoutlmv3-base tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: layoutlmv3-finetuned-cord_100 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv3-finetuned-cord_100 This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1687 - Precision: 0.9382 - Recall: 0.9574 - F1: 0.9477 - Accuracy: 0.9597 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 2500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.56 | 250 | 0.3730 | 0.8662 | 0.8708 | 0.8685 | 0.9042 | | 0.3943 | 3.12 | 500 | 0.2683 | 0.8939 | 0.9027 | 0.8983 | 0.9279 | | 0.3943 | 4.69 | 750 | 0.2232 | 0.9248 | 0.9339 | 0.9293 | 0.9474 | | 0.1559 | 6.25 | 1000 | 0.2129 | 0.9301 | 0.9407 | 0.9354 | 0.9504 | | 0.1559 | 7.81 | 1250 | 0.1782 | 0.9289 | 0.9529 | 0.9407 | 0.9563 | | 0.082 | 9.38 | 1500 | 0.1876 | 0.9327 | 0.9483 | 0.9405 | 0.9555 | | 0.082 | 10.94 | 1750 | 0.1746 | 0.9416 | 0.9559 | 0.9487 | 0.9606 | | 0.0486 | 12.5 | 2000 | 0.1848 | 0.9349 | 0.9498 | 0.9423 | 0.9550 | | 0.0486 | 14.06 | 2250 | 0.1739 | 0.9439 | 0.9590 | 0.9514 | 0.9623 | | 0.0351 | 15.62 | 2500 | 0.1687 | 0.9382 | 0.9574 | 0.9477 | 0.9597 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
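Since the card has no inference example, here is a sketch of token classification on a receipt image. It assumes the processor is loaded from the base checkpoint with its built-in OCR (which requires `pytesseract`); `receipt.png` is a placeholder path.

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

# Processor from the base checkpoint; apply_ocr=True runs Tesseract to extract words and boxes.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained("ikmalalfaozi/layoutlmv3-finetuned-cord")

image = Image.open("receipt.png").convert("RGB")  # placeholder image path
encoding = processor(image, return_tensors="pt")

logits = model(**encoding).logits
predictions = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```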
stablediffusionapi/big
stablediffusionapi
2024-05-24T10:44:28Z
29
1
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-05-24T10:41:11Z
--- license: creativeml-openrail-m tags: - modelslab.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # API Inference ![generated from modelslab.com](https://cdn2.stablediffusionapi.com/generations/bf190b5a-fe19-437c-ba05-82f29cb1f7ad-0.png) ## Get API Key Get an API key from [ModelsLab API](http://modelslab.com); no payment needed. Replace the key in the code below and change **model_id** to "big". Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com) Try the model for free: [Generate Images](https://modelslab.com/models/big) Model link: [View model](https://modelslab.com/models/big) View all models: [View Models](https://modelslab.com/models) ```python import requests import json url = "https://modelslab.com/api/v6/images/text2img" payload = json.dumps({ "key": "your_api_key", "model_id": "big", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) ``` > Use this coupon code to get 25% off **DMGG0RBN**
GautamR/whisper-tiny-hi_test
GautamR
2024-05-24T10:42:10Z
130
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-24T10:07:01Z
--- tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-tiny-hi_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-hi_test This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.3405 - Wer: 101.7544 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 4 - training_steps: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.3725 | 10.0 | 10 | 2.3405 | 101.7544 | | 1.5413 | 20.0 | 20 | 1.7254 | 263.1579 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
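No usage snippet is given; a minimal transcription sketch with the ASR `pipeline` could look like the following (the audio path is a placeholder).

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for Hindi transcription.
asr = pipeline("automatic-speech-recognition", model="GautamR/whisper-tiny-hi_test")

# Placeholder audio file; the pipeline resamples common audio formats automatically.
print(asr("sample_hindi.wav")["text"])
```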
xinliu/w2v-bert-2.0-mongolian-colab-CV16.0
xinliu
2024-05-24T10:39:09Z
78
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_16_0", "base_model:facebook/w2v-bert-2.0", "base_model:finetune:facebook/w2v-bert-2.0", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-23T04:25:47Z
--- license: mit tags: - generated_from_trainer base_model: facebook/w2v-bert-2.0 datasets: - common_voice_16_0 metrics: - wer model-index: - name: w2v-bert-2.0-mongolian-colab-CV16.0 results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: common_voice_16_0 type: common_voice_16_0 config: mn split: test args: mn metrics: - type: wer value: 0.32733304328910157 name: Wer --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # w2v-bert-2.0-mongolian-colab-CV16.0 This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the common_voice_16_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.5090 - Wer: 0.3273 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 1.8026 | 2.3715 | 300 | 0.6395 | 0.5274 | | 0.3561 | 4.7431 | 600 | 0.5804 | 0.4247 | | 0.1776 | 7.1146 | 900 | 0.5514 | 0.3697 | | 0.0764 | 9.4862 | 1200 | 0.5090 | 0.3273 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
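Again, no inference example is provided; below is a sketch using the low-level CTC API, assuming the repository includes the processor files saved during fine-tuning and that `librosa` is available for audio loading. The audio path is a placeholder.

```python
import torch
import librosa
from transformers import AutoProcessor, Wav2Vec2BertForCTC

model_id = "xinliu/w2v-bert-2.0-mongolian-colab-CV16.0"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2BertForCTC.from_pretrained(model_id)

speech, _ = librosa.load("sample_mn.wav", sr=16000)  # placeholder Mongolian audio clip
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(ids)[0])
```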
PipableAI/pip-api-data-training-v1-epoch-2
PipableAI
2024-05-24T10:37:45Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T08:56:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JawadC/camembert
JawadC
2024-05-24T10:37:15Z
1
1
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-24T10:04:01Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of Camembert cheese widget: - text: A piece of Camembert cheese on a rustic wooden table. output: url: image_0.png - text: A piece of Camembert cheese on a rustic wooden table. output: url: image_1.png - text: A piece of Camembert cheese on a rustic wooden table. output: url: image_2.png - text: A piece of Camembert cheese on a rustic wooden table. output: url: image_3.png --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - JawadC/camembert <Gallery /> ## Model description These are JawadC/camembert LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use "a photo of Camembert cheese" to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](JawadC/camembert/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
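The "How to use" section above is still a TODO; a minimal sketch of attaching the LoRA to the base SDXL pipeline with standard diffusers calls (GPU assumed) could be:

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model, then attach the Camembert LoRA weights.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("JawadC/camembert")

# Use the trigger phrase from the card to activate the concept.
image = pipe("a photo of Camembert cheese on a rustic wooden table").images[0]
image.save("camembert.png")
```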
pmrster/llama3-8b-instruct-journal-finetune
pmrster
2024-05-24T10:36:32Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
2024-05-24T10:33:58Z
--- license: llama3 library_name: peft tags: - generated_from_trainer base_model: meta-llama/Meta-Llama-3-8B-Instruct model-index: - name: llama3-8b-instruct-journal-finetune results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3-8b-instruct-journal-finetune This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0299 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-------:|:----:|:---------------:| | 2.9002 | 2.0833 | 25 | 1.8777 | | 1.1645 | 4.1667 | 50 | 1.6214 | | 0.4078 | 6.25 | 75 | 1.7856 | | 0.2373 | 8.3333 | 100 | 1.8434 | | 0.2209 | 10.4167 | 125 | 1.7767 | | 0.1953 | 12.5 | 150 | 1.8293 | | 0.1755 | 14.5833 | 175 | 1.7663 | | 0.1893 | 16.6667 | 200 | 1.8726 | | 0.1621 | 18.75 | 225 | 1.9366 | | 0.1657 | 20.8333 | 250 | 1.9146 | | 0.1593 | 22.9167 | 275 | 1.9225 | | 0.156 | 25.0 | 300 | 1.9411 | | 0.1549 | 27.0833 | 325 | 1.9504 | | 0.1525 | 29.1667 | 350 | 1.9608 | | 0.1511 | 31.25 | 375 | 1.9924 | | 0.1494 | 33.3333 | 400 | 1.9878 | | 0.1488 | 35.4167 | 425 | 2.0089 | | 0.1479 | 37.5 | 450 | 2.0089 | | 0.1448 | 39.5833 | 475 | 2.0233 | | 0.1447 | 41.6667 | 500 | 2.0299 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
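This repository ships LoRA adapter weights only; a sketch of loading them with PEFT's `AutoPeftModelForCausalLM` follows, assuming access to the gated base model and a GPU. The prompt is a placeholder.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "pmrster/llama3-8b-instruct-journal-finetune"

# Loads the base meta-llama/Meta-Llama-3-8B-Instruct weights and applies the adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

inputs = tokenizer("Today I feel", return_tensors="pt").to(model.device)  # placeholder prompt
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```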
twwch/mt5-base-summary
twwch
2024-05-24T10:36:17Z
114
1
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "summarization", "zh", "dataset:twwch/summary", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2023-10-26T06:27:58Z
--- license: apache-2.0 datasets: - twwch/summary language: - zh pipeline_tag: summarization --- ```python import torch from transformers import T5ForConditionalGeneration, T5Tokenizer model_path = "twwch/mt5-base-summary" model = T5ForConditionalGeneration.from_pretrained(model_path) tokenizer = T5Tokenizer.from_pretrained(model_path) device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu') model.to(device) model.eval() text = """ 什么是Nginx Nginx是一个开源的高性能HTTP和反向代理服务器。它可以用于处理静态资源、负载均衡、反向代理和缓存等任务。Nginx被广泛用于构建高可用性、高性能的Web应用程序和网站。它具有低内存消耗、高并发能力和良好的稳定性,因此在互联网领域非常受欢迎。 为什么使用Nginx 高性能:Nginx采用事件驱动的异步架构,能够处理大量并发连接而不会消耗过多的系统资源。它的处理能力比传统的Web服务器更高,在高并发负载下表现出色。 高可靠性:Nginx具有强大的容错能力和稳定性,能够在面对高流量和DDoS攻击等异常情况下保持可靠运行。它能通过健康检查和自动故障转移来保证服务的可用性。 负载均衡:Nginx可以作为反向代理服务器,实现负载均衡,将请求均匀分发给多个后端服务器。这样可以提高系统的整体性能和可用性。 静态文件服务:Nginx对静态资源(如HTML、CSS、JavaScript、图片等)的处理非常高效。它可以直接缓存静态文件,减轻后端服务器的负载。 扩展性:Nginx支持丰富的模块化扩展,可以通过添加第三方模块来提供额外的功能,如gzip压缩、SSL/TLS加密、缓存控制等。 如何处理请求 Nginx处理请求的基本流程如下: 接收请求:Nginx作为服务器软件监听指定的端口,接收客户端发来的请求。 解析请求:Nginx解析请求的内容,包括请求方法(GET、POST等)、URL、头部信息等。 配置匹配:Nginx根据配置文件中的规则和匹配条件,决定如何处理该请求。配置文件定义了虚拟主机、反向代理、负载均衡、缓存等特定的处理方式。 处理请求:Nginx根据配置的处理方式,可能会进行以下操作: 静态文件服务:如果请求的是静态资源文件,如HTML、CSS、JavaScript、图片等,Nginx可以直接返回文件内容,不必经过后端应用程序。 反向代理:如果配置了反向代理,Nginx将请求转发给后端的应用服务器,然后将其响应返回给客户端。这样可以提供负载均衡、高可用性和缓存等功能。 缓存:如果启用了缓存,Nginx可以缓存一些静态或动态内容的响应,在后续相同的请求中直接返回缓存的响应,减少后端负载并提高响应速度。 URL重写:Nginx可以根据配置的规则对URL进行重写,将请求从一个URL重定向到另一个URL或进行转换。 SSL/TLS加密:如果启用了SSL/TLS,Nginx可以负责加密和解密HTTPS请求和响应。 访问控制:Nginx可以根据配置的规则对请求进行访问控制,例如限制IP访问、进行身份认证等。 响应结果:Nginx根据处理结果生成响应报文,包括状态码、头部信息和响应内容。然后将响应发送给客户端。 """ def _split_text(text, length): chunks = [] start = 0 while start < len(text): if len(text) - start > length: pos_forward = start + length pos_backward = start + length pos = start + length while (pos_forward < len(text)) and (pos_backward >= 0) and (pos_forward < 20 + pos) and ( pos_backward + 20 > pos) and text[pos_forward] not in {'.', '。', ',', ','} and text[ pos_backward] not in {'.', '。', ',', ','}: pos_forward += 1 pos_backward -= 1 if pos_forward - pos >= 20 and pos_backward <= pos - 20: pos = start + length elif text[pos_backward] in {'.', '。', ',', ','}: pos = pos_backward else: pos = pos_forward chunks.append(text[start:pos + 1]) start = pos + 1 else: chunks.append(text[start:]) break # Combine last chunk with previous one if it's too short if len(chunks) > 1 and len(chunks[-1]) < 100: chunks[-2] += chunks[-1] chunks.pop() return chunks def summary(text): chunks = _split_text(text, 300) chunks = [ "summarize: " + chunk for chunk in chunks ] input_ids = tokenizer(chunks, return_tensors="pt", max_length=512, padding=True, truncation=True).input_ids.to(device) outputs = model.generate(input_ids, max_length=250, num_beams=4, no_repeat_ngram_size=2) tokens = outputs.tolist() output_text = [ tokenizer.decode(tokens[i], skip_special_tokens=True) for i in range(len(tokens)) ] for i in range(len(output_text)): print(output_text[i]) summary(text) ``` 输出: ``` 段落内容Nginx是一个开源的高性能HTTP和反向代理服务器,可以用于处理静态资源、负载均衡、反反代理和缓存等任务。它被广泛用于构建高可用性、高性能的Web应用程序和网站,具有低内存消耗、高并发能力和良好的稳定性,因此在互联网领域非常受欢迎。高性能和高可靠性相比传统的Web服务器更高,在高并且发负担下表现出色。高稳定性和容错能力,能够在面对高流量和DDoS攻击等异常情况下保持可靠运行。 段落内容Nginx处理请求的基本流程,包括负载均衡、静态文件服务、扩展性、如何解决请求的流程和如何处理。其中包括接收请求和解析请求,以及对客户端发来的请求进行解析。 段落内容Nginx的配置匹配和处理请求。配置文件定义了虚拟主机、反向代理、负载均衡、缓存等特定的处理方式,并根据配置进行静态文件服务和反面信息处理的操作。通过调用静存来实现高可用性,并且可以提供高可性和缓储等功能。 段落内容主要涉及到缓存静态或动态内容的响应,包括URL重写、SSL/TLS加密、访问控制、响应结果生成和发送给客户端等功能。Nginx可以根据配置的规则对URL进行重写作,将请求从一个URL轻定向到另一个URL或进行转换。 综上所述,Nginx的缓解和响应速度可以快速提高。 ```
hgnoi/Hbvr7RzCp5AyK6aC
hgnoi
2024-05-24T10:36:03Z
133
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T10:34:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
uygarkurt/llama3s-merged-linear
uygarkurt
2024-05-24T10:34:19Z
91
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-08T16:23:15Z
--- library_name: transformers license: mit --- # LLM Model Merging ## YouTube Tutorial <div align="center"> <a href="https://youtu.be/gNXBp3wttFU">Model Merging: Merge LLMs to Create Frankestein Models - Python, HuggingFace, Mergekit</a> <br> <br> <a href="https://youtu.be/gNXBp3wttFU"> <img src="./thumbnail1-button.png" height="85%" width="85%"/> </a> </div> ## GitHub You can find the GitHub repository here: https://github.com/uygarkurt/Model-Merge In this specific case, I typed `llama-3` into the open LLM leaderboard, took the top three models, merged them, and created a better-ranking model without any training. The main library we use is [mergekit](https://github.com/arcee-ai/mergekit). <br/> <div align="center"> <a href=""> <img alt="open-source-image" src="https://img.shields.io/badge/%E2%9D%A4%EF%B8%8F_Open_Source-%2350C878?style=for-the-badge"/> </a> <a href="https://youtu.be/gNXBp3wttFU"> <img alt="youtube-tutorial" src="https://img.shields.io/badge/YouTube_Tutorial-grey?style=for-the-badge&logo=YouTube&logoColor=%23FF0000"/> </a> <a href="https://github.com/uygarkurt/Model-Merge"> <img alt="github-image" src="https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white"/> </a> </div> <br/>
federicocosta1989/wav2vec2-base-cv-demo-colab
federicocosta1989
2024-05-24T10:31:33Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_13_0", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-23T13:23:26Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: facebook/wav2vec2-base datasets: - common_voice_13_0 model-index: - name: wav2vec2-base-cv-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-cv-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the common_voice_13_0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
mpachha/mpachha
mpachha
2024-05-24T10:26:08Z
0
0
peft
[ "peft", "safetensors", "llama", "trl", "sft", "generated_from_trainer", "dataset:samsum", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "license:llama3", "region:us" ]
null
2024-05-23T12:10:37Z
--- license: llama3 library_name: peft tags: - trl - sft - generated_from_trainer base_model: meta-llama/Meta-Llama-3-8B datasets: - samsum model-index: - name: Meta-Llama-3-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Meta-Llama-3-ft This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 2.0490 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 5 - total_train_batch_size: 10 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0323 | 0.07 | 100 | 2.0490 | ### Framework versions - PEFT 0.11.1 - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
Alphacode-AI/Alphacode-MALI-11B
Alphacode-AI
2024-05-24T10:25:01Z
2246
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "conversational", "ko", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-20T05:50:57Z
--- license: cc-by-4.0 language: - ko pipeline_tag: text-generation tags: - merge --- ![alphacode](logo.png) ![mali](Alphacode_MALI.jpeg) MALI-11B (Model with Auto Learning Ideation) is a merged version of Alphacode's models, fine-tuned on our in-house custom data. Training spec: we trained the model on 8x A100 GPUs with DeepSpeed, the HuggingFace TRL trainer, and HuggingFace Accelerate. Contact: Alphacode Co. [https://alphacode.ai/]
hiren838/tinyllama-2
hiren838
2024-05-24T10:22:58Z
147
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-10T05:38:28Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Niggendar/duchaitenPonyXLNo_ponyNoScoreV10
Niggendar
2024-05-24T10:21:47Z
82
3
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-05-24T10:13:41Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
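The tags on this entry identify the checkpoint as a `StableDiffusionXLPipeline`, but the card's "How to Get Started" section is empty. A minimal sketch of standard 🧨 diffusers usage follows; the repo id `author/sdxl-model` is a hypothetical placeholder (this entry's actual id should be substituted), and the prompt and step count are arbitrary:

```python
# Minimal SDXL sketch, not from the card; "author/sdxl-model" is a
# hypothetical placeholder for this entry's repo id.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "author/sdxl-model",        # hypothetical repo id
    torch_dtype=torch.float16,  # halves memory on CUDA GPUs
).to("cuda")

image = pipe(
    "a photo of an astronaut riding a horse on the moon",
    num_inference_steps=30,
).images[0]
image.save("astronaut.png")
```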
falan42/gemma-SODA-mark1.1_Finetune
falan42
2024-05-24T10:21:46Z
76
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-24T10:10:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
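The tags on this entry ("4-bit", "bitsandbytes", "conversational") indicate a quantized Gemma chat checkpoint, while the card's getting-started section is empty. A minimal sketch, assuming the checkpoint loads like a bitsandbytes-quantized Gemma model and keeps Gemma's chat template; if the repo already ships pre-quantized weights, the explicit `quantization_config` may be redundant:

```python
# Hedged sketch: 4-bit loading with bitsandbytes; the prompt and generation
# settings are illustrative, not taken from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "falan42/gemma-SODA-mark1.1_Finetune"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,  # may be redundant for pre-quantized repos
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what a SODA-style dialogue dataset is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```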
lainshower/Llama3-8b-orca-ep3
lainshower
2024-05-24T10:19:11Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T09:06:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
alkav/Mistral-7B-Instruct-v0.2-finetuned-customer-feedback
alkav
2024-05-24T10:09:38Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-06T12:10:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ihebMissaoui/layoutlm_v1_base_ft_funsd_kie_LoRa_220epochs
ihebMissaoui
2024-05-24T09:55:29Z
159
0
transformers
[ "transformers", "safetensors", "layoutlm", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-24T09:05:06Z
--- library_name: transformers metrics: - f1 --- # Model Card for Model ID I used LoRA fine-tuning with r=16 (trainable params: 702,733 || all params: 113,237,786), and the model achieves these metrics: {'loss': 2.2948044538497925, 'precision': 0.7458487084870848, 'recall': 0.8089044522261131, 'f1': 0.7760979121670266} <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
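The card above states only that LoRA fine-tuning with r=16 was used, with 702,733 of 113,237,786 parameters trainable. A sketch of how such an adapter is typically configured with the `peft` library; the base checkpoint, `target_modules`, `lora_alpha`, dropout, and label count are assumptions, not taken from the card:

```python
# Hedged LoRA setup for LayoutLM token classification (FUNSD-style KIE).
# Only r=16 comes from the card; everything else here is an assumption.
from transformers import LayoutLMForTokenClassification
from peft import LoraConfig, get_peft_model

base = LayoutLMForTokenClassification.from_pretrained(
    "microsoft/layoutlm-base-uncased",  # assumed base checkpoint
    num_labels=7,                       # assumed FUNSD BIO label count
)
lora_config = LoraConfig(
    r=16,                               # stated in the card
    lora_alpha=32,                      # assumed
    target_modules=["query", "value"],  # typical choice for BERT-style encoders
    lora_dropout=0.1,                   # assumed
    task_type="TOKEN_CLS",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # the card reports 702,733 / 113,237,786
```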
alfiyaanware/tinyllama-DTC-code-v0
alfiyaanware
2024-05-24T09:53:45Z
148
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T09:21:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ryan0712/llama-3-8b-slow-DUS-max-layer1-method2
ryan0712
2024-05-24T09:52:47Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "NousResearch/Meta-Llama-3-8B", "base_model:NousResearch/Meta-Llama-3-8B", "base_model:finetune:NousResearch/Meta-Llama-3-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T09:50:16Z
--- tags: - merge - mergekit - lazymergekit - NousResearch/Meta-Llama-3-8B base_model: - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B --- # llama-3-8b-slow-DUS-max-layer1-method2 llama-3-8b-slow-DUS-max-layer1-method2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) ## 🧩 Configuration ```yaml slices: - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [0, 1] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [1, 2] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [2, 3] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [3, 4] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [4, 5] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [5, 6] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [6, 7] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [7, 8] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [8, 9] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [9, 10] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [10, 11] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [11, 12] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [12, 13] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [13, 14] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [14, 15] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [30, 31] merge_method: passthrough dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = 
"ryan0712/llama-3-8b-slow-DUS-layer1-method2" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
lainshower/Llama2-13b-orca-ep3
lainshower
2024-05-24T09:51:52Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T09:05:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
blackhole33/uzbek-speaker-verification-v9
blackhole33
2024-05-24T09:50:51Z
0
1
nemo
[ "nemo", "pytorch", "NeMo", "license:cc-by-4.0", "region:us" ]
null
2024-05-24T09:50:42Z
--- license: cc-by-4.0 library_name: nemo tags: - pytorch - NeMo --- # Uzbek-speaker-verification-v9 <style> img { display: inline; } </style> [![Model architecture](https://img.shields.io/badge/Model_Arch-PUT-YOUR-ARCHITECTURE-HERE-lightgrey#model-badge)](#model-architecture) | [![Model size](https://img.shields.io/badge/Params-PUT-YOUR-MODEL-SIZE-HERE-lightgrey#model-badge)](#model-architecture) | [![Language](https://img.shields.io/badge/Language-PUT-YOUR-LANGUAGE-HERE-lightgrey#model-badge)](#datasets) **Put a short model description here.** See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/index.html) for complete architecture details. ## NVIDIA NeMo: Training To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version. ``` pip install nemo_toolkit['all'] ``` ## How to Use this Model The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset. ### Automatically instantiate the model **NOTE**: Please update the model class below to match the class of the model being uploaded. ```python from nemo.core import ModelPT model = ModelPT.from_pretrained("blackhole33/uzbek-speaker-verification-v9") ``` ### NOTE Add some information about how to use the model here. An example is provided for ASR inference below. ### Transcribing using Python First, let's get a sample ``` wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav ``` Then simply do: ``` asr_model.transcribe(['2086-149220-0033.wav']) ``` ### Transcribing many audio files ```shell python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="blackhole33/uzbek-speaker-verification-v9" audio_dir="" ``` ### Input **Add some information about the inputs to this model** ### Output **Add some information about the outputs of this model** ## Model Architecture **Add information here discussing architectural details of the model or any comments to users about the model.** ## Training **Add information here about how the model was trained. It should be as detailed as possible, potentially including the link to the script used to train as well as the base config used to train the model. If extraneous scripts are used to prepare the components of the model, please include them here.** ### NOTE An example is provided below for ASR The NeMo toolkit [3] was used for training the models for over several hundred epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/fastconformer/fast-conformer_transducer_bpe.yaml). The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py). ### Datasets **Try to provide as detailed a list of datasets as possible. 
If possible, provide links to the datasets on HF by adding them to the manifest section at the top of the README (marked by ---).** ### NOTE An example for the manifest section is provided below for ASR datasets datasets: - librispeech_asr - fisher_corpus - Switchboard-1 - WSJ-0 - WSJ-1 - National-Singapore-Corpus-Part-1 - National-Singapore-Corpus-Part-6 - vctk - voxpopuli - europarl - multilingual_librispeech - mozilla-foundation/common_voice_8_0 - MLCommons/peoples_speech The corresponding text in this section for those datasets is stated below - The model was trained on 64K hours of English speech collected and prepared by NVIDIA NeMo and Suno teams. The training dataset consists of a private subset with 40K hours of English speech plus 24K hours from the following public datasets: - Librispeech 960 hours of English speech - Fisher Corpus - Switchboard-1 Dataset - WSJ-0 and WSJ-1 - National Speech Corpus (Part 1, Part 6) - VCTK - VoxPopuli (EN) - Europarl-ASR (EN) - Multilingual Librispeech (MLS EN) - 2,000 hour subset - Mozilla Common Voice (v7.0) - People's Speech - 12,000 hour subset ## Performance **Add information here about the performance of the model. Discuss which metric is being used to evaluate the model and, if there are external links explaining the custom metric, please link to them. ### NOTE An example is provided below for an ASR metrics list that can be added to the top of the README model-index: - name: PUT_MODEL_NAME results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: AMI (Meetings test) type: edinburghcstr/ami config: ihm split: test args: language: en metrics: - name: Test WER type: wer value: 17.10 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Earnings-22 type: revdotcom/earnings22 split: test args: language: en metrics: - name: Test WER type: wer value: 14.11 Provide any caveats about the results presented at the top of the discussion so that nuance is not lost. It should ideally be in a tabular format (you can use the following website to make your tables in markdown format - https://www.tablesgenerator.com/markdown_tables)** ## Limitations **Discuss any practical limitations to the model when being used in real world cases. They can also be legal disclaimers, or discussion regarding the safety of the model (particularly in the case of LLMs).** ### Note An example is provided below Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech. ## License License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license. ## References **Provide appropriate references in the markdown link format below. Please order them numerically.** [1] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
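The template above leaves the model class unspecified. For speaker verification, NeMo's usual class is `EncDecSpeakerLabelModel`; a hedged sketch, assuming this checkpoint is such a model and that the repo id resolves from the Hub:

```python
# Hedged sketch: the card does not state the model class, so this assumes a
# NeMo speaker-label model (the class NeMo normally uses for verification).
import nemo.collections.asr as nemo_asr

model = nemo_asr.models.EncDecSpeakerLabelModel.from_pretrained(
    "blackhole33/uzbek-speaker-verification-v9"
)

# verify_speakers embeds two mono WAV files and returns True when their
# cosine similarity clears the model's decision threshold.
same = model.verify_speakers("speaker_a_utt1.wav", "speaker_a_utt2.wav")
print("same speaker" if same else "different speakers")
```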
sadhaklal/linear-regression-geron-time-series
sadhaklal
2024-05-24T09:50:50Z
22
1
pytorch
[ "pytorch", "safetensors", "pytorch_model_hub_mixin", "model_hub_mixin", "time-series-forecasting", "region:us" ]
time-series-forecasting
2024-05-23T15:57:30Z
--- tags: - pytorch_model_hub_mixin - model_hub_mixin metrics: - mae library_name: pytorch pipeline_tag: time-series-forecasting --- # linear-regression-geron-time-series A linear regression model trained on Aurelien Geron's synthetic time series dataset - from Lesson 8 of the Udacity course "Intro to TensorFlow for Deep Learning". It is a PyTorch adaptation of the TensorFlow model trained in the lesson. The time series contains a linear trend, a yearly seasonality and some white noise. The linear regression model takes windows of 30 time steps of the series as input, and predicts the next time step of the series as output. Code: https://github.com/sambitmukherjee/handson-ml3-pytorch/blob/main/bonus/udacity/Intro_to_TF_for_DL/Lesson_8_A.ipynb Experiment tracking: https://wandb.ai/sadhaklal/linear-regression-geron-time-series ## Usage ``` import numpy as np # Create synthetic time series: def trend(time, slope=0): return slope * time def seasonal_pattern(season_time): """Just an arbitrary pattern; you can change it if you wish.""" return np.where(season_time < 0.4, np.cos(season_time * 2 * np.pi), 1 / np.exp(3 * season_time)) def seasonality(time, period, amplitude=1, phase=0): """Repeats the same pattern in each period.""" season_time = ((time + phase) % period) / period return amplitude * seasonal_pattern(season_time) def white_noise(time, noise_level=1, seed=None): return np.random.RandomState(seed).randn(len(time)) * noise_level time = np.arange(4 * 365 + 1) # 1 is added since every four years there is a leap year. baseline = 10 slope = 0.05 amplitude = 50 series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude) noise_level = 5 noise = white_noise(time, noise_level, seed=42) series += noise # Define training & validation periods: split_time = 1000 x_train = series[:split_time] x_valid = series[split_time:] # Create custom `Dataset` class; it'll be used by the `model_forecast` function below: import torch from torch.utils.data import Dataset, DataLoader class WindowDataset(Dataset): def __init__(self, series, window_size): window_size += 1 self.windows = [] for i in range(0, len(series) - window_size + 1, 1): self.windows.append(series[i:i + window_size]) def __len__(self): return len(self.windows) def __getitem__(self, idx): window = self.windows[idx] return window[:-1], window[-1] # Re-create model & load weights: import torch.nn as nn from huggingface_hub import PyTorchModelHubMixin device = torch.device("cpu") class LinearModel(nn.Module, PyTorchModelHubMixin): def __init__(self, window_size): super().__init__() self.linear = nn.Linear(window_size, 1) def forward(self, x): return self.linear(x) window_size = 30 model = LinearModel.from_pretrained("sadhaklal/linear-regression-geron-time-series", window_size=window_size) model.to(device) # Forecast on validation period: def model_forecast(model, series): series = torch.tensor(series, dtype=torch.float32) ds = WindowDataset(series, window_size) dl = DataLoader(ds, batch_size=32, shuffle=False) forecast = [] for x_batch, y_batch in dl: x_batch, y_batch = x_batch.to(device), y_batch.to(device) with torch.no_grad(): preds = model(x_batch) forecast.append(preds.squeeze()) forecast = torch.cat(forecast) return forecast.cpu().numpy() linear_forecast = model_forecast(model, series[split_time - window_size:]) ``` ## Metric MAE on the validation period: 4.99 --- This model has been pushed to the Hub using the 
[PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration.
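For context on the reported validation MAE of 4.99, a common sanity check (not part of the card) is a naive last-value forecast over the same validation period. The sketch below reuses `series` and `split_time` from the snippet above; on this series the baseline typically lands around 5.6 to 6, so the linear model is a modest improvement:

```python
# Naive baseline: predict each validation step with the previous observation.
# Assumes `series` and `split_time` are defined as in the usage snippet above.
import numpy as np

naive_forecast = series[split_time - 1:-1]
x_valid = series[split_time:]
naive_mae = np.mean(np.abs(naive_forecast - x_valid))
print(f"Naive forecast MAE: {naive_mae:.2f}")  # compare against the model's 4.99
```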
lainshower/Llama2-13b-orca-ep2
lainshower
2024-05-24T09:44:50Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T08:34:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
THUDM/cogvlm2-llama3-chat-19B-int4
THUDM
2024-05-24T09:43:10Z
1,247
28
transformers
[ "transformers", "pytorch", "text-generation", "chat", "cogvlm2", "conversational", "custom_code", "en", "arxiv:2311.03079", "license:other", "autotrain_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-24T06:57:53Z
--- license: other license_name: cogvlm2 license_link: https://huggingface.co/THUDM/cogvlm2-llama3-chat-19B-int4/blob/main/LICENSE language: - en pipeline_tag: text-generation tags: - chat - cogvlm2 inference: false --- # CogVLM2 <div align="center"> <img src=https://raw.githubusercontent.com/THUDM/CogVLM2/53d5d5ea1aa8d535edffc0d15e31685bac40f878/resources/logo.svg width="40%"/> </div> <p align="center"> 👋 <a href="resources/WECHAT.md" target="_blank">Wechat</a> · 💡<a href="http://36.103.203.44:7861/" target="_blank">Online Demo</a> · 🎈<a href="https://github.com/THUDM/CogVLM2" target="_blank">Github Page</a> </p> <p align="center"> 📍Experience the larger-scale CogVLM model on the <a href="https://open.bigmodel.cn/dev/api#glm-4v">ZhipuAI Open Platform</a>. </p> ## Model introduction We launch a new generation of **CogVLM2** series of models and open source two models built with [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). Compared with the previous generation of CogVLM open source models, the CogVLM2 series of open source models have the following improvements: 1. Significant improvements in many benchmarks such as `TextVQA`, `DocVQA`. 2. Support **8K** content length. 3. Support image resolution up to **1344 * 1344**. 4. Provide an open source model version that supports both **Chinese and English**. CogVlM2 Int4 model requires 16G GPU memory and Must be run on Linux with Nvidia GPU. | Model name | cogvlm2-llama3-chat-19B-int4 | cogvlm2-llama3-chat-19B | |---------------------|------------------------------|-------------------------| | GPU Memory Required | 16G | 42G | | System Required | Linux (With Nvidia GPU) | Linux (With Nvidia GPU) | ## Benchmark Our open source models have achieved good results in many lists compared to the previous generation of CogVLM open source models. Its excellent performance can compete with some non-open source models, as shown in the table below: | Model | Open Source | LLM Size | TextVQA | DocVQA | ChartQA | OCRbench | MMMU | MMVet | MMBench | |--------------------------------|-------------|----------|----------|----------|----------|----------|----------|----------|----------| | CogVLM1.1 | ✅ | 7B | 69.7 | - | 68.3 | 590 | 37.3 | 52.0 | 65.8 | | LLaVA-1.5 | ✅ | 13B | 61.3 | - | - | 337 | 37.0 | 35.4 | 67.7 | | Mini-Gemini | ✅ | 34B | 74.1 | - | - | - | 48.0 | 59.3 | 80.6 | | LLaVA-NeXT-LLaMA3 | ✅ | 8B | - | 78.2 | 69.5 | - | 41.7 | - | 72.1 | | LLaVA-NeXT-110B | ✅ | 110B | - | 85.7 | 79.7 | - | 49.1 | - | 80.5 | | InternVL-1.5 | ✅ | 20B | 80.6 | 90.9 | **83.8** | 720 | 46.8 | 55.4 | **82.3** | | QwenVL-Plus | ❌ | - | 78.9 | 91.4 | 78.1 | 726 | 51.4 | 55.7 | 67.0 | | Claude3-Opus | ❌ | - | - | 89.3 | 80.8 | 694 | **59.4** | 51.7 | 63.3 | | Gemini Pro 1.5 | ❌ | - | 73.5 | 86.5 | 81.3 | - | 58.5 | - | - | | GPT-4V | ❌ | - | 78.0 | 88.4 | 78.5 | 656 | 56.8 | **67.7** | 75.0 | | CogVLM2-LLaMA3 (Ours) | ✅ | 8B | 84.2 | **92.3** | 81.0 | 756 | 44.3 | 60.4 | 80.5 | | CogVLM2-LLaMA3-Chinese (Ours) | ✅ | 8B | **85.0** | 88.4 | 74.7 | **780** | 42.8 | 60.5 | 78.9 | All reviews were obtained without using any external OCR tools ("pixel only"). ## Quick Start here is a simple example of how to use the model to chat with the CogVLM2 model. For More use case. 
see our [github](https://github.com/THUDM/CogVLM2).

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "THUDM/cogvlm2-llama3-chat-19B-int4"
DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
# Use bfloat16 on Ampere or newer GPUs (compute capability >= 8), float16 otherwise.
TORCH_TYPE = torch.bfloat16 if torch.cuda.is_available() and torch.cuda.get_device_capability()[0] >= 8 else torch.float16

tokenizer = AutoTokenizer.from_pretrained(
    MODEL_PATH,
    trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=TORCH_TYPE,
    trust_remote_code=True,
    low_cpu_mem_usage=True,
).eval()

text_only_template = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {} ASSISTANT:"

while True:
    image_path = input("image path >>>>> ")
    if image_path == '':
        print('You did not enter image path, the following will be a plain text conversation.')
        image = None
        text_only_first_query = True
    else:
        image = Image.open(image_path).convert('RGB')

    history = []

    while True:
        query = input("Human:")
        if query == "clear":
            break

        # For text-only chat, wrap the first query in the system template and
        # replay the accumulated history manually on later turns.
        if image is None:
            if text_only_first_query:
                query = text_only_template.format(query)
                text_only_first_query = False
            else:
                old_prompt = ''
                for _, (old_query, response) in enumerate(history):
                    old_prompt += old_query + " " + response + "\n"
                query = old_prompt + "USER: {} ASSISTANT:".format(query)

        if image is None:
            input_by_model = model.build_conversation_input_ids(
                tokenizer,
                query=query,
                history=history,
                template_version='chat'
            )
        else:
            input_by_model = model.build_conversation_input_ids(
                tokenizer,
                query=query,
                history=history,
                images=[image],
                template_version='chat'
            )
        inputs = {
            'input_ids': input_by_model['input_ids'].unsqueeze(0).to(DEVICE),
            'token_type_ids': input_by_model['token_type_ids'].unsqueeze(0).to(DEVICE),
            'attention_mask': input_by_model['attention_mask'].unsqueeze(0).to(DEVICE),
            'images': [[input_by_model['images'][0].to(DEVICE).to(TORCH_TYPE)]] if image is not None else None,
        }
        gen_kwargs = {
            "max_new_tokens": 2048,
            "pad_token_id": 128002,
        }
        with torch.no_grad():
            outputs = model.generate(**inputs, **gen_kwargs)
            outputs = outputs[:, inputs['input_ids'].shape[1]:]
            response = tokenizer.decode(outputs[0])
            response = response.split("<|end_of_text|>")[0]
            print("\nCogVLM2:", response)
        history.append((query, response))
```

## License

This model is released under the CogVLM2 [LICENSE](LICENSE). For models built with Meta Llama 3, please also adhere to the [LLAMA3_LICENSE](LLAMA3_LICENSE).

## Citation

If you find our work helpful, please consider citing the following papers

```
@misc{wang2023cogvlm,
      title={CogVLM: Visual Expert for Pretrained Language Models},
      author={Weihan Wang and Qingsong Lv and Wenmeng Yu and Wenyi Hong and Ji Qi and Yan Wang and Junhui Ji and Zhuoyi Yang and Lei Zhao and Xixuan Song and Jiazheng Xu and Bin Xu and Juanzi Li and Yuxiao Dong and Ming Ding and Jie Tang},
      year={2023},
      eprint={2311.03079},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
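As an addendum to the Quick Start above, here is a condensed, non-interactive sketch of the same flow for a single image + question turn. It reuses `model`, `tokenizer`, `DEVICE` and `TORCH_TYPE` exactly as set up in the demo; the image path, question, and `max_new_tokens` value are illustrative placeholders.

```python
# Condensed single-turn sketch using the same objects as the interactive demo above.
# Assumes model, tokenizer, DEVICE and TORCH_TYPE are already initialized as shown;
# "example.jpg" and the question are placeholders.
import torch
from PIL import Image

image = Image.open("example.jpg").convert("RGB")
inp = model.build_conversation_input_ids(
    tokenizer,
    query="Describe this image.",
    history=[],
    images=[image],
    template_version="chat",
)
inputs = {
    "input_ids": inp["input_ids"].unsqueeze(0).to(DEVICE),
    "token_type_ids": inp["token_type_ids"].unsqueeze(0).to(DEVICE),
    "attention_mask": inp["attention_mask"].unsqueeze(0).to(DEVICE),
    "images": [[inp["images"][0].to(DEVICE).to(TORCH_TYPE)]],
}
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=512, pad_token_id=128002)
    # Strip the prompt tokens, then cut at the end-of-text marker.
    answer = tokenizer.decode(out[0, inputs["input_ids"].shape[1]:]).split("<|end_of_text|>")[0]
print(answer)
```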
Xwin-LM/Xwin-Math-7B-V1.1
Xwin-LM
2024-05-24T09:37:51Z
48
8
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2403.04706", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-06T12:25:36Z
---
license: llama2
---

# Xwin-Math

<p align="center">
<a href="https://github.com/Xwin-LM/Xwin-LM/tree/main/Xwin-Math"><img src="https://img.shields.io/badge/GitHub-yellow.svg?style=social&logo=github"></a>
<a href="https://huggingface.co/Xwin-LM"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue"></a>
</p>

[Paper Link](https://arxiv.org/pdf/2403.04706)

Xwin-Math is a series of powerful SFT LLMs for math problems based on LLaMA-2.

## 🔥 News

- 💥 [May, 2024] The [Xwin-Math-70B-V1.1](https://huggingface.co/Xwin-LM/Xwin-Math-70B-V1.1) model achieves **51.9 pass@1 on the MATH benchmark** and **90.6 pass@1 on the GSM8K benchmark**. This is a new SoTA model based on LLaMA-2-70B!
- 💥 [May, 2024] The [Xwin-Math-7B-V1.1](https://huggingface.co/Xwin-LM/Xwin-Math-7B-V1.1) model achieves **44.7 pass@1 on the MATH benchmark** and **84.4 pass@1 on the GSM8K benchmark**. This is a new SoTA model based on LLaMA-2-7B!
- 💥 [Nov, 2023] The [Xwin-Math-70B-V1.0](https://huggingface.co/Xwin-LM/Xwin-Math-70B-V1.0) model achieves **31.8 pass@1 on the MATH benchmark** and **87.0 pass@1 on the GSM8K benchmark**. This performance places it first amongst all open-source models!
- 💥 [Nov, 2023] The [Xwin-Math-7B-V1.0](https://huggingface.co/Xwin-LM/Xwin-Math-7B-V1.0) and [Xwin-Math-13B-V1.0](https://huggingface.co/Xwin-LM/Xwin-Math-13B-V1.0) models achieve **66.6 and 76.2 pass@1 on the GSM8K benchmark**, ranking as top-1 among all LLaMA-2 based 7B and 13B open-source models respectively!

## ✨ Model Card

| Model | GSM8K | MATH | Checkpoint | License |
|:-:|:-:|:-:|:-:|:-:|
|Xwin-Math-7B-V1.0 | 66.6 | 17.4 | 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-Math-7B-V1.0" target="_blank">HF Link</a> | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a>|
|Xwin-Math-7B-V1.1 | 84.4 | 44.7 | 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-Math-7B-V1.1" target="_blank">HF Link</a> | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a>|
|Xwin-Math-13B-V1.0| 76.2 | 21.7 | 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-Math-13B-V1.0" target="_blank">HF Link</a> | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a>|
|Xwin-Math-70B-V1.0| 87.0 | 31.8 | 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-Math-70B-V1.0" target="_blank">HF Link</a> | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a>|
|Xwin-Math-70B-V1.1| 90.6 | 51.9 | 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-Math-70B-V1.1" target="_blank">HF Link</a> | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a>|

* Xwin-Math-7B-V1.1 uses 1.92M GSM8K and 960K MATH synthetic data
* Xwin-Math-70B-V1.1 uses 960K GSM8K and 480K MATH synthetic data

## 🚀 Benchmarks

### Xwin-Math performance on [MATH](https://github.com/hendrycks/math) and [GSM8K](https://github.com/openai/grade-school-math).

Xwin-Math-70B-V1.0 has achieved **31.8% on MATH** and **87.0% on GSM8K**. These scores are **5.3** and **3.1** points higher, respectively, than the previous state-of-the-art open-source MetaMath and LEMAv1 models.
| **Model** |**MATH (Our test)** | **GSM8K (Our test)** |
|:-:|:-:|:-:|
| GPT-4 (zero-shot) | 52.4 | 94.8 |
| GPT-35-Turbo (8-shot)| 37.1 | 81.0 |
| | |
| WizardMath-70B | 23.9 | 81.1 |
| MAmmoTH-70B | 20.8 | 72.6 |
| MetaMath-70B | 26.5 | 82.0 |
| LEMAv1-70B | 25.9 | 83.9 |
|**Xwin-Math-70B-V1.0** |**31.8**|**87.0**|
|**Xwin-Math-70B-V1.1** |**51.9**|**90.6**|
| | |
| WizardMath-13B | 15.0 | 63.7 |
| MAmmoTH-13B | 12.3 | 56.2 |
| MetaMath-13B | 22.7 | 70.9 |
| LEMAv1-13B | 13.6 | 65.0 |
|**Xwin-Math-13B-V1.0** | 21.7 | 76.2 |
| | |
| WizardMath-7B | 10.9 | 55.0 |
| MAmmoTH-7B | 9.6 | 50.2 |
| MetaMath-7B | 20.1 | 66.6 |
| LEMAv1-7B | 10.0 | 54.7 |
|**Xwin-Math-7B-V1.0** | 17.4 | 66.6 |
|**Xwin-Math-7B-V1.1** | 44.7 | 84.4 |

We obtained these results using our flexible evaluation strategy. Due to differences in environment and hardware, the numbers may differ slightly from those in the report, but we ensure that the evaluation is as accurate and fair as possible.

### Xwin-Math performance on other math benchmarks.

Our 70B model shows strong mathematical reasoning capabilities among all open-source models, and it even approaches or surpasses the performance of GPT-35-Turbo on some benchmarks.

| **Model** | SVAMP | ASDiv | NumGlue | Algebra | MAWPS | **Average** |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| GPT-35-Turbo (8-shot)| 80.6 | 84.1 | 81.8 | 90.5 | 91.7 | 85.7 |
| | |
| WizardMath-70B | 80.2 | 75.8 | 71.4 | 64.0 | 74.9 | 73.3 |
| MAmmoTH-70B | 71.2 | 73.9 | 62.7 | 58.1 | 72.2 | 67.6 |
| MetaMath-70B | 85.8 | 81.1 | 77.5 | 79.7 | 81.4 | 81.1 |
| LEMAv1-70B-MATH * | 81.6 | 77.1 | 72.1 | 69.4 | 81.8 | 76.5 |
|**Xwin-Math-70B-V1.0** | 84.0 | 84.1 | 81.3 | 78.4 | 90.8 | 83.7 |

\* LEMAv1 has two models; we report the better LEMAv1-70B-MATH model on these benchmarks.

## 🔨 Evaluation

To evaluate a model's mathematical capabilities more flexibly and ensure a fair comparison of results, particularly for the MATH benchmark, we have developed a new evaluation tool. We have also assessed the pass@1 results of recent models on the MATH and GSM8K benchmarks, which provides more accurate results. We hope this toolkit can benefit the open-source community by providing more accurate insights and conclusions. For a deeper understanding of our evaluation tool and methods, please visit [here](https://github.com/Xwin-LM/Xwin-LM/tree/main/Xwin-Math/eval).

* "Report" refers to the accuracy stated in the original papers.
* "Repro" indicates results reproduced by generating responses and evaluating them with the respective open-source models and scripts.
* "Strict" and "Flex" denote the results we achieved by employing our two strategies to extract answers and evaluate the same responses as "Repro".
| Model | MATH <br> (Report) <br/> |MATH <br> (Repro) <br/> | MATH <br> (Strict) <br/> |MATH <br> (Flex) <br/> | GSM8K <br> (Report) <br/> |GSM8K <br> (Repro) <br/>| GSM8K <br> (Strict) <br/> | GSM8K <br> (Flex) <br/> |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| GPT-35-Turbo (8-shot)| 34.1 | - | 23.8 | 37.1 | 80.8 | - | 77.9 | 81.0 |
| | |
| WizardMath-70B | 22.7 | 23.0 | 23.9 | 23.9 | 81.6 | 81.4 | 81.1 | 81.1 |
| MAmmoTH-70B | 21.1 | 18.0 | 20.0 | 20.8 | 72.4 | 72.6 | 72.6 | 72.6 |
| MetaMath-70B | 26.6 | 25.9 | 26.3 | 26.5 | 82.3 | 82.3 | 82.0 | 82.0 |
|**Xwin-Math-70B-V1.0** | - | - |**31.8**|**31.8**| - | - |**87.0**|**87.0**|
| | |
| WizardMath-13B | 14.0 | 14.2 | 14.9 | 15.0 | 63.9 | 63.9 | 63.7 | 63.7 |
| MAmmoTH-13B | 12.9 | 10.8 | 11.8 | 12.3 | 56.3 | 56.2 | 56.1 | 56.2 |
| MetaMath-13B | 22.4 | 22.5 | 22.6 | 22.7 | 72.3 | 71.0 | 70.9 | 70.9 |
|**Xwin-Math-13B-V1.0** | - | - | 21.6 | 21.7 | - | - | 76.2 | 76.2 |
| | |
| WizardMath-7B | 10.7 | 10.3 | 10.9 | 10.9 | 54.9 | 55.2 | 55.0 | 55.0 |
| MAmmoTH-7B | 10.4 | 8.6 | 9.1 | 9.6 | 50.5 | 50.2 | 50.2 | 50.2 |
| MetaMath-7B | 19.8 | 19.6 | 19.9 | 20.1 | 66.5 | 66.6 | 66.6 | 66.6 |
|**Xwin-Math-7B-V1.0** | - | - | 17.3 | 17.4 | - | - | 66.6 | 66.6 |

### Installation

Before you start, please install the requirements.

```bash
pip install -r requirements.txt
```

We tested our results using `python 3.8` and `cuda 11.8`. We recommend using docker.

```bash
docker run --gpus all -it --rm --ipc=host superbench/dev:cuda11.8
```

### Generate

To generate the model's responses, you can use the `generate.py` script. Note that generating responses is separate from verifying their correctness; correctness is checked afterwards.

For generation, we use the Vicuna-v1.1 system prompt with chain-of-thought and format instruction. We also employ a greedy decoding strategy and set the maximum sequence length to 2048. (A short Python sketch that assembles this prompt appears at the end of this card.)

```
"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {instruction} Give your solution in detail. In the end, write your final answer in the format of 'The answer is: <ANSWER>.'. ASSISTANT:"
```

Here is a simple example of generating with [vLLM](https://docs.vllm.ai/en/latest/).

```bash
cd eval
python generate.py --dataset_path dataset/gsm8k.json --model_path path/to/your/model --tensor_parallel_size 4
```

By default the results are written to `eval/response`, using the prompt `eval/prompt/xwin_math.json`. If you wish to change the output path or use a different prompt:

```bash
python generate.py --dataset_path dataset/gsm8k.json --model_path path/to/your/model --tensor_parallel_size 4 --output_path /your/path --prompt_path /your/path
```

We provide some datasets (in `eval/dataset`):

- `gsm8k.json`: GSM8K.
- `math.json`: MATH.
- `combination.json`: A combination of many benchmarks, which can evaluate the OOD capability of the model.

If you want to use your own datasets, please format your dataset like this.

```jsonc
[
  {
    "question": "Janet\u2019s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?",
    "answer": "18",
    "type": "GSM8K",
    "subtype": "",
    "level": 0,
  },
  // ... more data items
]
```

### Evaluate

To verify the accuracy of the answers after generation, you can use the `check.py` script. Here is a simple example:

```bash
cd eval
python eval.py /path/to/model/response
```

The results are saved in `eval/evaluation`. If you do not want to save the results or want to change the save path:

```bash
python eval.py --data_path /path/to/model/response --save_path /path/to/save --save_result True
```

Once you run the script, the terminal displays the output as a table showing the number of instances for each benchmark and the corresponding accuracy. Here is a hypothetical example of what the output might look like:

||Type|Subtype|Level|Correct|Incorrect|Total|Accuracy|
|---|---|---|---|---|---|---|---|
|0|MAWPS|addsub|0|359|33|392|0.915816|
|1|MAWPS|multiarith|0|586|14|600|0.976667|
|...|

## Citation

Please consider citing our work if you use the data or code in this repo.

```
@software{xwin-math,
  title = {Xwin-Math},
  author = {Xwin-Math Team},
  url = {https://github.com/Xwin-LM/Xwin-LM/Xwin-Math},
  version = {pre-release},
  year = {2023},
  month = {11},
}
```

## Acknowledgements

Thanks to [Llama 2](https://ai.meta.com/llama/), [FastChat](https://github.com/lm-sys/FastChat), and [vLLM](https://github.com/vllm-project/vllm).
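As an addendum to the Generate section above, here is a minimal Python sketch assembling the Vicuna-v1.1 prompt quoted in this card for a single problem. The template string is taken verbatim from the card; the `build_prompt` helper and the example question are illustrative and not part of the Xwin-Math codebase.

```python
# Minimal sketch: build the Vicuna-v1.1 style prompt quoted above for one problem.
# `build_prompt` is an illustrative helper, not an Xwin-Math API.
PROMPT = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: {instruction} Give your solution in detail. In the end, write your final answer "
    "in the format of 'The answer is: <ANSWER>.'. ASSISTANT:"
)

def build_prompt(instruction: str) -> str:
    """Insert one math problem into the fixed system/format template."""
    return PROMPT.format(instruction=instruction)

print(build_prompt("What is 15% of 240?"))
```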
T3Q-LLM/T3Q-LLM2-sft1.5
T3Q-LLM
2024-05-24T09:37:37Z
30
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T08:51:22Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

## Evaluation

hf-causal-experimental (pretrained=T3Q-LLM/T3Q-LLM2-sft1.5,use_accelerate=true,trust_remote_code=true), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8

| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.9516|± |0.0057|
| | |macro_f1|0.9515|± |0.0057|
|kobest_copa | 0|acc |0.7760|± |0.0132|
| | |macro_f1|0.7756|± |0.0132|
|kobest_hellaswag| 0|acc |0.5060|± |0.0224|
| | |acc_norm|0.5780|± |0.0221|
| | |macro_f1|0.5033|± |0.0223|
|kobest_sentineg | 0|acc |0.7128|± |0.0227|
| | |macro_f1|0.6930|± |0.0239|
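The settings line above comes from the EleutherAI lm-evaluation-harness. A hedged sketch of reproducing this run through the harness's Python API follows; the argument names match the v0.3-era `simple_evaluate` signature implied by the `hf-causal-experimental` model type, so adjust them if your installed harness version differs.

```python
# Sketch: re-run the KoBEST evaluation above with the EleutherAI
# lm-evaluation-harness (v0.3-style API; newer releases differ slightly).
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal-experimental",
    model_args="pretrained=T3Q-LLM/T3Q-LLM2-sft1.5,use_accelerate=True,trust_remote_code=True",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag", "kobest_sentineg"],
    num_fewshot=0,   # zero-shot, as reported above
    batch_size=8,
)
print(results["results"])
```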
OusmaneAliCiss/0214778
OusmaneAliCiss
2024-05-24T09:37:15Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-24T09:37:15Z
--- license: apache-2.0 ---
lainshower/Llama2-13b-orca-ep1
lainshower
2024-05-24T09:36:20Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T08:34:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
alpcansoydas/iban_ocr
alpcansoydas
2024-05-24T09:25:30Z
49
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-05-24T09:25:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JawadC/beaufort
JawadC
2024-05-24T09:24:01Z
1
1
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-22T18:15:15Z
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of Beaufort cheese
widget:
- text: A photo of BEAUFORT cheese in a bucket
  output:
    url: image_0.png
- text: A photo of BEAUFORT cheese in a bucket
  output:
    url: image_1.png
- text: A photo of BEAUFORT cheese in a bucket
  output:
    url: image_2.png
- text: A photo of BEAUFORT cheese in a bucket
  output:
    url: image_3.png
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# SDXL LoRA DreamBooth - JawadC/beaufort

<Gallery />

## Model description

These are JawadC/beaufort LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use "a photo of Beaufort cheese" to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format. [Download](JawadC/beaufort/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
# (a minimal sketch follows this card)
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
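Since the How-to-use snippet above is still a TODO, here is a minimal sketch of loading these LoRA adaption weights onto the SDXL base model with the standard diffusers LoRA API. The dtype, step count, and output filename are illustrative choices, not values from the card.

```python
# Minimal sketch: load the SDXL base model and apply the JawadC/beaufort LoRA weights.
# fp16 dtype and 30 steps are illustrative defaults, not from the card.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("JawadC/beaufort")

# Use the trigger phrase from the card's widget examples.
image = pipe("A photo of BEAUFORT cheese in a bucket", num_inference_steps=30).images[0]
image.save("beaufort.png")
```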
YakirHasid/swin-tiny-patch4-window7-224-finetuned-eurosat
YakirHasid
2024-05-24T09:22:37Z
218
0
transformers
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-24T08:46:26Z
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swin-tiny-patch4-window7-224-finetuned-eurosat

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0859
- Accuracy: 0.9718

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.4789        | 0.9979 | 351  | 0.1450          | 0.9496   |
| 0.3963        | 1.9986 | 703  | 0.0961          | 0.9678   |
| 0.3134        | 2.9936 | 1053 | 0.0859          | 0.9718   |

### Framework versions

- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
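For reference, a sketch of how the hyperparameters listed above map onto 🤗 `TrainingArguments`. Only the values reported in this card are set; the dataset and model wiring are omitted, and everything else is left at its Trainer default.

```python
# Sketch: the reported hyperparameters expressed as TrainingArguments.
# Only arguments stated in the card are set; all else stays default.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-eurosat",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # yields the effective train batch size of 128
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```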
Keerthanah2002/ppc
Keerthanah2002
2024-05-24T09:22:35Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-05-24T09:18:21Z
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---

### ppc Dreambooth model trained by Keerthanah2002 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook.

Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb).

Sample pictures of this concept:
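For programmatic use outside the A1111 Colab, here is a minimal sketch of loading this DreamBooth checkpoint with diffusers. The prompt wording is illustrative; "ppc" is the trained concept name from this repo, and dtype and step count are assumed defaults.

```python
# Sketch: run the DreamBooth concept "ppc" with diffusers.
# The prompt phrasing is illustrative -- only the concept token "ppc" comes from the repo.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Keerthanah2002/ppc", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of ppc", num_inference_steps=30).images[0]
image.save("ppc.png")
```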
tsavage68/MedQA_L3_1000steps_1e6rate_05beta_CSFTDPO
tsavage68
2024-05-24T09:21:24Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:tsavage68/MedQA_L3_1000steps_1e6rate_SFT", "base_model:finetune:tsavage68/MedQA_L3_1000steps_1e6rate_SFT", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T23:37:16Z
--- license: llama3 base_model: tsavage68/MedQA_L3_1000steps_1e6rate_SFT tags: - trl - dpo - generated_from_trainer model-index: - name: MedQA_L3_1000steps_1e6rate_03beta_CSFTDPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MedQA_L3_1000steps_1e6rate_03beta_CSFTDPO This model is a fine-tuned version of [tsavage68/MedQA_L3_1000steps_1e6rate_SFT](https://huggingface.co/tsavage68/MedQA_L3_1000steps_1e6rate_SFT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5473 - Rewards/chosen: 5.1238 - Rewards/rejected: 0.9227 - Rewards/accuracies: 0.8198 - Rewards/margins: 4.2011 - Logps/rejected: -32.0093 - Logps/chosen: -21.0808 - Logits/rejected: -1.0586 - Logits/chosen: -1.0567 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6909 | 0.0489 | 50 | 0.6059 | -0.4307 | -0.6542 | 0.7538 | 0.2235 | -35.1631 | -32.1898 | -0.7254 | -0.7246 | | 0.4343 | 0.0977 | 100 | 0.7202 | 6.9486 | 5.2431 | 0.6989 | 1.7054 | -23.3686 | -17.4314 | -0.7816 | -0.7804 | | 0.7011 | 0.1466 | 150 | 0.6146 | 3.7158 | 2.0629 | 0.7407 | 1.6528 | -29.7289 | -23.8970 | -0.8414 | -0.8404 | | 0.3318 | 0.1954 | 200 | 0.7133 | 3.7895 | 1.2854 | 0.7385 | 2.5041 | -31.2840 | -23.7495 | -0.8346 | -0.8329 | | 0.4681 | 0.2443 | 250 | 0.5702 | 4.4998 | 2.1458 | 0.7758 | 2.3541 | -29.5633 | -22.3288 | -0.8127 | -0.8116 | | 0.4446 | 0.2931 | 300 | 0.5104 | 4.3384 | 1.4734 | 0.8022 | 2.8651 | -30.9081 | -22.6517 | -0.9419 | -0.9402 | | 0.6618 | 0.3420 | 350 | 0.5375 | 4.1100 | 1.1267 | 0.7912 | 2.9833 | -31.6015 | -23.1084 | -1.0095 | -1.0077 | | 0.6507 | 0.3908 | 400 | 0.4901 | 4.9193 | 1.9906 | 0.8088 | 2.9288 | -29.8737 | -21.4898 | -1.0601 | -1.0586 | | 0.6922 | 0.4397 | 450 | 0.5171 | 4.9828 | 1.7479 | 0.8088 | 3.2350 | -30.3591 | -21.3628 | -1.0672 | -1.0656 | | 1.0069 | 0.4885 | 500 | 0.5208 | 5.1851 | 1.8633 | 0.8154 | 3.3218 | -30.1282 | -20.9583 | -1.0738 | -1.0722 | | 0.3449 | 0.5374 | 550 | 0.5287 | 4.7906 | 1.3304 | 0.8022 | 3.4602 | -31.1941 | -21.7474 | -1.0809 | -1.0794 | | 0.5353 | 0.5862 | 600 | 0.5222 | 4.7907 | 1.0268 | 0.8242 | 3.7639 | -31.8012 | -21.7471 | -1.0774 | -1.0757 | | 0.5161 | 0.6351 | 650 | 0.5463 | 5.2594 | 1.2353 | 0.8176 | 4.0241 | -31.3842 | -20.8097 | -1.0613 | -1.0595 | | 0.3686 | 0.6839 | 700 | 0.5430 | 5.0821 | 0.9881 | 0.8154 | 4.0939 | -31.8786 | -21.1644 | -1.0604 | -1.0585 | | 0.4533 | 0.7328 | 750 | 0.5497 | 5.2255 | 1.0741 | 0.8286 | 4.1513 | -31.7065 | -20.8775 | -1.0601 | -1.0582 | | 0.4364 | 0.7816 | 800 | 
0.5480 | 5.1239 | 0.9444 | 0.8198 | 4.1795 | -31.9660 | -21.0807 | -1.0600 | -1.0581 | | 0.6738 | 0.8305 | 850 | 0.5512 | 5.1510 | 0.9491 | 0.8198 | 4.2019 | -31.9565 | -21.0265 | -1.0594 | -1.0575 | | 0.7741 | 0.8793 | 900 | 0.5493 | 5.1296 | 0.9313 | 0.8220 | 4.1983 | -31.9922 | -21.0693 | -1.0588 | -1.0569 | | 0.4633 | 0.9282 | 950 | 0.5498 | 5.1295 | 0.9317 | 0.8220 | 4.1978 | -31.9914 | -21.0696 | -1.0587 | -1.0568 | | 0.1659 | 0.9770 | 1000 | 0.5473 | 5.1238 | 0.9227 | 0.8198 | 4.2011 | -32.0093 | -21.0808 | -1.0586 | -1.0567 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.0.0+cu117 - Datasets 2.19.1 - Tokenizers 0.19.1
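For context, here is a sketch of the listed hyperparameters expressed with TRL's `DPOConfig` (available in recent TRL releases). The `beta=0.5` value is inferred from the repo name ("05beta") and is an assumption, since the card text itself names "03beta"; model, tokenizer, and dataset wiring are omitted.

```python
# Sketch: training hyperparameters above as a TRL DPOConfig.
# beta=0.5 is inferred from the repo name ("05beta") -- an assumption,
# as the card does not state the value explicitly.
from trl import DPOConfig

config = DPOConfig(
    beta=0.5,
    learning_rate=1e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,  # effective train batch size 4
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    output_dir="MedQA_L3_1000steps_1e6rate_05beta_CSFTDPO",
)
```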
lainshower/Llama2-7b-orca-ep2
lainshower
2024-05-24T09:20:15Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T08:34:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ylacombe/test-mha-large-untrained-v2
ylacombe
2024-05-24T09:17:51Z
8
0
transformers
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-24T09:15:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lainshower/Llama2-7b-orca-ep1
lainshower
2024-05-24T09:15:40Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T08:33:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
QuantFactory/aya-23-8B-GGUF
QuantFactory
2024-05-24T09:15:12Z
222
5
transformers
[ "transformers", "gguf", "cohere", "text-generation", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "el", "fa", "pl", "id", "cs", "he", "hi", "nl", "ro", "ru", "tr", "uk", "vi", "base_model:CohereForAI/aya-23-8B", "base_model:quantized:CohereForAI/aya-23-8B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-24T00:51:59Z
---
library_name: transformers
base_model: CohereForAI/aya-23-8B
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
pipeline_tag: text-generation
tags:
- cohere
---

# Aya-23-8B-GGUF

- This is a quantized version of [CohereForAI/aya-23-8B](https://huggingface.co/CohereForAI/aya-23-8B) created using llama.cpp

# Model Description

Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages.

This model card corresponds to the 8-billion version of the Aya 23 model. We also released a 35-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-35B).

We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese.

Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/)

- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), which also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: aya-23-8B
- Model Size: 8 billion parameters

**Try Aya 23**

You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).

### Usage

Please install a transformers version that includes the necessary changes for this model:

```python
# pip install transformers==4.41.1
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereForAI/aya-23-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format the message with the command-r-plus chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>

gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)

gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```

### Example Notebook

[This notebook](https://huggingface.co/CohereForAI/aya-23-8B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes).

## Model Details

**Input**: Models input text only.

**Output**: Models generate text only.

**Model Architecture**: Aya-23-8B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese

**Context length**: 8192

### Evaluation

<img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates.png" alt="average win rates" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation.

### Terms of Use

We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
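Since this repository ships GGUF quantizations rather than the original safetensors weights, here is a sketch of local inference with llama-cpp-python. The quant filename below is an assumption; substitute the file you actually download from the Files & versions tab.

```python
# Sketch: run a GGUF quant of Aya-23-8B locally with llama-cpp-python.
# The filename is an assumption -- use whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="aya-23-8B.Q4_K_M.gguf", n_ctx=8192)  # 8K context per the card
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}],
    max_tokens=100,
    temperature=0.3,
)
print(out["choices"][0]["message"]["content"])
```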
ylacombe/test-gqa-large-untrained-v2
ylacombe
2024-05-24T09:14:53Z
5
0
transformers
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-24T09:12:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ananasa/lora_phi1_1epochs
ananasa
2024-05-24T09:02:08Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-24T09:02:02Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit --- # Uploaded model - **Developed by:** ananasa - **License:** apache-2.0 - **Finetuned from model:** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This Mistral-architecture model (a Phi-3-mini fine-tune) was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
arjun3103/llama3-4bit-sql-finetune
arjun3103
2024-05-24T08:58:35Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-24T08:58:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Mouwiya/BLIP_image_captioning
Mouwiya
2024-05-24T08:47:56Z
105
1
transformers
[ "transformers", "safetensors", "blip", "image-text-to-text", "image-to-text", "dataset:Mouwiya/image-in-Words400", "endpoints_compatible", "region:us" ]
image-to-text
2024-05-20T06:08:30Z
--- library_name: transformers pipeline_tag: image-to-text datasets: - Mouwiya/image-in-Words400 --- # BLIP Image Captioning ## Model Description BLIP_image_captioning is a model based on the BLIP (Bootstrapping Language-Image Pre-training) architecture, specifically designed for image captioning tasks. The model has been fine-tuned on the "image-in-words400" dataset, which consists of images and their corresponding descriptive captions. This model leverages both visual and textual data to generate accurate and contextually relevant captions for images. ## Model Details - **Model Architecture**: BLIP (Bootstrapping Language-Image Pre-training) - **Base Model**: Salesforce/blip-image-captioning-base - **Fine-tuning Dataset**: mouwiya/image-in-words400 - **Number of Parameters**: 109 million ## Training Data The model was fine-tuned on a shuffled and subsetted version of the **"image-in-words400"** dataset. A total of 400 examples were used during the fine-tuning process to allow for faster iteration and development. ## Training Procedure - **Optimizer**: AdamW - **Learning Rate**: 2e-5 - **Batch Size**: 16 - **Epochs**: 3 - **Evaluation Metric**: BLEU Score ## Usage To use this model for image captioning, you can load it using the Hugging Face transformers library and perform inference as shown below: ```python from transformers import BlipProcessor, BlipForConditionalGeneration from PIL import Image import requests from io import BytesIO # Load the processor and model model_name = "Mouwiya/BLIP_image_captioning" processor = BlipProcessor.from_pretrained(model_name) model = BlipForConditionalGeneration.from_pretrained(model_name) # Example usage image_url = "URL_OF_THE_IMAGE" response = requests.get(image_url) image = Image.open(BytesIO(response.content)).convert("RGB") inputs = processor(images=image, return_tensors="pt") outputs = model.generate(**inputs) caption = processor.decode(outputs[0], skip_special_tokens=True) print(caption) ``` ## Evaluation The model was evaluated on a subset of the "image-in-words400" dataset using the BLEU score. The evaluation results are as follows: - **Average BLEU Score**: 0.35 This score indicates the model's ability to generate captions that closely match the reference descriptions in terms of overlapping n-grams. ## Limitations - **Dataset Size**: The model was fine-tuned on a relatively small subset of the dataset, which may limit its generalization capabilities. - **Domain-Specific**: This model was trained on a specific dataset and may not perform as well on images from different domains. ## Contact **Mouwiya S. A. Al-Qaisieh** [email protected]
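The reported BLEU score can be reproduced along these lines; this is a minimal sketch using the Hugging Face `evaluate` library, where the captions shown are placeholders rather than the actual evaluation split:

```python
import evaluate

# BLEU expects detokenized strings: one prediction per example, a list of references per example.
bleu = evaluate.load("bleu")
predictions = ["a dog running through a field"]            # model-generated caption (placeholder)
references = [["a brown dog runs across a grassy field"]]  # ground-truth caption(s) (placeholder)
print(bleu.compute(predictions=predictions, references=references))
```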
eliaslundgren/Mistral-7B-Swedish-Distractor-Generator
eliaslundgren
2024-05-24T08:36:00Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T16:46:03Z
--- license: apache-2.0 ---
casual/nlp_til3
casual
2024-05-24T08:31:04Z
111
0
transformers
[ "transformers", "tensorboard", "safetensors", "mobilebert", "token-classification", "generated_from_trainer", "base_model:google/mobilebert-uncased", "base_model:finetune:google/mobilebert-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-17T02:54:13Z
--- license: apache-2.0 base_model: google/mobilebert-uncased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: nlp_til3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nlp_til3 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - Precision: 1.0 - Recall: 1.0 - F1: 1.0 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:| | No log | 1.0 | 219 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 | | No log | 2.0 | 438 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.0.1+cu117 - Datasets 2.19.1 - Tokenizers 0.19.1
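The card omits a usage snippet; below is a minimal inference sketch under the assumption that the checkpoint loads with the standard token-classification pipeline (the example sentence is illustrative):

```python
from transformers import pipeline

# aggregation_strategy="simple" groups word-piece tokens back into whole entities.
ner = pipeline("token-classification", model="casual/nlp_til3", aggregation_strategy="simple")
print(ner("Alpha company will move to grid 123456 at 0600 hours."))
```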
venetis/llama3-8b-hermes-sandals-100
venetis
2024-05-24T08:25:25Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "license:llama3", "region:us" ]
null
2024-05-24T06:53:49Z
--- license: llama3 library_name: peft tags: - axolotl - generated_from_trainer base_model: meta-llama/Meta-Llama-3-8B model-index: - name: llama3-8b-hermes-sandals-100 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: meta-llama/Meta-Llama-3-8B model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: false strict: false datasets: - path: ./data/openhermes2_5_first_100.jsonl type: sharegpt conversation: llama3 dataset_prepared_path: val_set_size: 0.15 output_dir: ./outputs_lora-out hub_model_id: venetis/llama3-8b-hermes-sandals-100 chat_template: llama3 adapter: lora lora_model_dir: lora_r: 32 lora_alpha: 16 lora_dropout: 0.05 lora_target_linear: true lora_fan_in_fan_out: sequence_len: 4096 sample_packing: true eval_sample_packing: false pad_to_sequence_len: true wandb_project: llama-3-8b-hermes-sandals-first100 wandb_entity: venetispall gradient_accumulation_steps: 4 micro_batch_size: 2 num_epochs: 4 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 2e-4 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true s2_attention: warmup_steps: 10 evals_per_epoch: 4 eval_table_size: eval_max_new_tokens: 128 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: pad_token: <|end_of_text|> ``` </details><br> # llama3-8b-hermes-sandals-100 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9990 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.6949 | 1.0 | 1 | 1.0314 | | 0.3183 | 1.3333 | 2 | 1.0297 | | 0.7635 | 2.0 | 3 | 1.0211 | | 1.0254 | 2.6667 | 4 | 0.9990 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.2 - Pytorch 2.1.2+cu118 - Datasets 2.19.1 - Tokenizers 0.19.1
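Since this repository holds a LoRA adapter rather than full weights, inference requires attaching it to the base model. A minimal loading sketch, where the dtype and device settings are illustrative and the base checkpoint is gated on the Hub:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Meta-Llama-3-8B is gated; accept its license on the Hub before downloading.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, "venetis/llama3-8b-hermes-sandals-100")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
```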
DavidPL1/a2c-PandaReachDense-v3
DavidPL1
2024-05-24T08:22:12Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-24T08:19:42Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.26 +/- 0.14 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of an **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename is an assumption based on the standard huggingface_sb3 naming convention, so check the repo's file list:

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is an assumption from the usual huggingface_sb3 convention; adjust if the repo differs.
checkpoint = load_from_hub(repo_id="DavidPL1/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
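Continuing from the snippet above, the reported mean reward can be checked with stable-baselines3's evaluation helper; this sketch assumes `panda_gym` is installed, which registers the Panda environments:

```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- importing registers the Panda environments
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("PandaReachDense-v3")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```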
FaceAdapter/FaceAdapter
FaceAdapter
2024-05-24T08:19:57Z
0
10
diffusers
[ "diffusers", "onnx", "safetensors", "arxiv:2405.12970", "license:apache-2.0", "region:us" ]
null
2024-05-22T12:17:53Z
--- license: apache-2.0 --- # FaceAdapter Model Card <div align="center"> [**Project Page**](https://faceadapter.github.io/face-adapter.github.io/) **|** [**Paper**](https://arxiv.org/pdf/2405.12970) **|** [**Code**](https://github.com/FaceAdapter/Face-Adapter) **|** [🤗 **Gradio demo**](https://huggingface.co/spaces/FaceAdapter/FaceAdapter) </div> ## Introduction Face-Adapter is an efficient and effective face editing adapter for pre-trained diffusion models, specifically targeting face reenactment and swapping tasks. <div align="center"> <img src='__assets__/banner.gif'> </div> ## Usage You can download the model directly from this repository or fetch it in a Python script:

```python
# Download a specific file
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="FaceAdapter/FaceAdapter", filename="controlnet/config.json", local_dir="./checkpoints")

# Download all files
from huggingface_hub import snapshot_download
snapshot_download(repo_id="FaceAdapter/FaceAdapter", local_dir="./checkpoints")
```
QuantFactory/NeuralDaredevil-7B-GGUF
QuantFactory
2024-05-24T08:19:32Z
102
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "dpo", "rlhf", "mlabonne/example", "text-generation", "base_model:mlabonne/NeuralDaredevil-7B", "base_model:quantized:mlabonne/NeuralDaredevil-7B", "license:cc-by-nc-4.0", "model-index", "endpoints_compatible", "region:us" ]
text-generation
2024-04-20T18:04:25Z
--- license: cc-by-nc-4.0 tags: - merge - mergekit - lazymergekit - dpo - rlhf - mlabonne/example base_model: mlabonne/NeuralDaredevil-7B model-index: - name: NeuralDaredevil-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 69.88 name: normalized accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDaredevil-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.62 name: normalized accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDaredevil-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.12 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDaredevil-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 66.85 source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDaredevil-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.08 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDaredevil-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 73.16 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDaredevil-7B name: Open LLM Leaderboard library_name: transformers pipeline_tag: text-generation --- # NeuralDaredevil-7B-GGUF - This is quantized version of [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B) created using llama.cpp ![](https://i.imgur.com/D80Ua7T.png) # Model Description NeuralDaredevil-7B is a DPO fine-tune of [mlabonne/Daredevil-7B](https://huggingface.co/mlabonne/Daredevil-7B) using the [argilla/distilabel-intel-orca-dpo-pairs](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs) preference dataset and my DPO notebook from [this article](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac). Thanks [Argilla](https://huggingface.co/argilla) for providing the dataset and the training recipe [here](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp). 💪 ## 🏆 Evaluation ### Nous The evaluation was performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval) on Nous suite. 
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench | |---|---:|---:|---:|---:|---:| | [**mlabonne/NeuralDaredevil-7B**](https://huggingface.co/mlabonne/NeuralDaredevil-7B) [📄](https://gist.github.com/mlabonne/cbeb077d1df71cb81c78f742f19f4155) | **59.39** | **45.23** | **76.2** | **67.61** | **48.52** | | [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) [📄](https://gist.github.com/mlabonne/f5a5bf8c0827bbec2f05b97cc62d642c) | 59.4 | 44.38 | 76.53 | 69.44 | 47.25 | | [argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp) [📄](https://gist.github.com/mlabonne/9082c4e59f4d3f3543c5eda3f4807040) | 58.93 | 45.38 | 76.48 | 65.68 | 48.18 | | [mlabonne/NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B) [📄](https://gist.github.com/mlabonne/b31572a4711c945a4827e7242cfc4b9d) | 58.4 | 44.59 | 76.17 | 65.94 | 46.9 | | [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) [📄](https://gist.github.com/mlabonne/1afab87b543b0717ec08722cf086dcc3) | 53.71 | 44.17 | 73.72 | 52.53 | 44.4 | | [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) [📄](https://gist.github.com/mlabonne/88b21dd9698ffed75d6163ebdc2f6cc8) | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 | You can find the complete benchmark on [YALL - Yet Another LLM Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard). # [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__NeuralDaredevil-7B) | Metric |Value| |---------------------------------|----:| |Avg. |74.12| |AI2 Reasoning Challenge (25-Shot)|69.88| |HellaSwag (10-Shot) |87.62| |MMLU (5-Shot) |65.12| |TruthfulQA (0-shot) |66.85| |Winogrande (5-shot) |82.08| |GSM8k (5-shot) |73.16| ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mlabonne/NeuralDaredevil-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` <p align="center"> <a href="https://github.com/argilla-io/distilabel"> <img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/> </a> </p>
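Since this repository ships GGUF quants, the transformers snippet above applies to the original fp16 model; the quantized files load with llama.cpp instead. A hedged sketch follows, where the exact quant filename is an assumption and should be checked against the repo's file list:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Filename is an assumption; check the repo's file list for the quant you want.
path = hf_hub_download(repo_id="QuantFactory/NeuralDaredevil-7B-GGUF", filename="NeuralDaredevil-7B.Q4_K_M.gguf")
llm = Llama(model_path=path)
print(llm("What is a large language model?", max_tokens=128)["choices"][0]["text"])
```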
malerbe/ppo-Huggy
malerbe
2024-05-24T08:10:26Z
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-05-24T08:10:21Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: malerbe/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
InferenceIllusionist/dolphin-2.9.1-mixtral-1x22b-iMat-GGUF
InferenceIllusionist
2024-05-24T08:06:32Z
193
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "axolotl", "iMat", "en", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:abacusai/SystemChat-1.1", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:cognitivecomputations/dolphin-2.9.1-mixtral-1x22b", "base_model:quantized:cognitivecomputations/dolphin-2.9.1-mixtral-1x22b", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-05-23T23:26:30Z
--- base_model: cognitivecomputations/dolphin-2.9.1-mixtral-1x22b datasets: - cognitivecomputations/Dolphin-2.9 - teknium/OpenHermes-2.5 - m-a-p/CodeFeedback-Filtered-Instruction - cognitivecomputations/dolphin-coder - cognitivecomputations/samantha-data - microsoft/orca-math-word-problems-200k - abacusai/SystemChat-1.1 - Locutusque/function-calling-chatml - internlm/Agent-FLAN language: - en library_name: transformers license: apache-2.0 quantized_by: InferenceIllusionist tags: - generated_from_trainer - axolotl - iMat --- <img src="https://i.imgur.com/P68dXux.png" width="400"/> # dolphin-2.9.1-mixtral-1x22b-iMat-GGUF Quantized from fp16. * Weighted quantizations were created using the fp16 GGUF and [groups_merged-enhancedV2-TurboMini.txt](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-9432658) in 228 chunks with n_ctx=512 * This method of calculating the importance matrix showed improvements in some areas for Mistral 7b and Llama3 8b models; see the post above for details * The enhancedv2-turbomini file appends snippets from turboderp's calibration data to the standard groups_merged.txt file * A repetition penalty of 1.05-1.18 has worked well for these quants. For a brief rundown of iMatrix quant performance, please see this [PR](https://github.com/ggerganov/llama.cpp/pull/5747) <i>All quants are verified working prior to uploading to the repo for your safety and convenience.</i> <b>Tip:</b> Pick a file size under your GPU's VRAM while still allowing some room for context for best speed. You may need to pad this further depending on whether you are running image gen or TTS as well. The original model card can be found [here](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-mixtral-1x22b)
yaronedri01/q-Taxi-v3
yaronedri01
2024-05-24T08:04:20Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-05-24T07:58:16Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage

```python
import gymnasium as gym

# load_from_hub is the helper defined in the Deep RL course notebook; it downloads and unpickles the saved Q-table dict.
model = load_from_hub(repo_id="yaronedri01/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
MuntasirHossain/Orpo-Mistral-7B-v0.3-peft-adapter
MuntasirHossain
2024-05-24T08:03:40Z
4
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "orpo", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.3", "base_model:adapter:mistralai/Mistral-7B-v0.3", "license:apache-2.0", "region:us" ]
null
2024-05-24T08:02:57Z
--- license: apache-2.0 library_name: peft tags: - trl - orpo - generated_from_trainer base_model: mistralai/Mistral-7B-v0.3 model-index: - name: Orpo-Mistral-7B-v0.3-peft-adapter results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Orpo-Mistral-7B-v0.3-peft-adapter This model is a fine-tuned version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6129 - Rewards/chosen: -0.0574 - Rewards/rejected: -0.0794 - Rewards/accuracies: 0.6429 - Rewards/margins: 0.0221 - Logps/rejected: -0.7943 - Logps/chosen: -0.5738 - Logits/rejected: -3.2681 - Logits/chosen: -3.2859 - Nll Loss: 0.5465 - Log Odds Ratio: -0.5869 - Log Odds Chosen: 0.4083 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-06 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Nll Loss | Log Odds Ratio | Log Odds Chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:--------------:|:---------------:| | 0.7914 | 0.2003 | 62 | 0.6937 | -0.0709 | -0.0851 | 0.6071 | 0.0142 | -0.8514 | -0.7090 | -3.2582 | -3.2848 | 0.6333 | -0.6178 | 0.2402 | | 0.802 | 0.4006 | 124 | 0.6338 | -0.0607 | -0.0781 | 0.6429 | 0.0174 | -0.7809 | -0.6070 | -3.2744 | -3.2972 | 0.5693 | -0.5966 | 0.3287 | | 0.8605 | 0.6010 | 186 | 0.6204 | -0.0586 | -0.0799 | 0.6071 | 0.0213 | -0.7990 | -0.5863 | -3.2692 | -3.2895 | 0.5538 | -0.5899 | 0.3927 | | 0.7359 | 0.8013 | 248 | 0.6129 | -0.0574 | -0.0794 | 0.6429 | 0.0221 | -0.7943 | -0.5738 | -3.2681 | -3.2859 | 0.5465 | -0.5869 | 0.4083 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
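This repository stores only the LoRA adapter produced by ORPO training, so inference requires attaching it to the base checkpoint. A minimal sketch follows; the merge step is optional but yields a standalone model:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.3")
model = PeftModel.from_pretrained(base, "MuntasirHossain/Orpo-Mistral-7B-v0.3-peft-adapter")
merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("mistral-7b-v0.3-orpo-merged")
```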
Zoyd/OpenLLM-Ro_RoMistral-7b-Instruct-5_0bpw_exl2
Zoyd
2024-05-24T08:00:14Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "ro", "arxiv:2405.07703", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "5-bit", "exl2", "region:us" ]
text-generation
2024-05-24T07:56:15Z
--- license: cc-by-nc-4.0 language: - ro --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> RoMistral is a family of pretrained and fine-tuned generative text models for Romanian. This is the repository for the **instruct 7B model**. Links to other models can be found at the bottom of this page. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> OpenLLM-Ro represents the first open-source effort to build an LLM specialized for Romanian. OpenLLM-Ro develops and publicly releases a collection of Romanian LLMs, both as foundational models and as instruct and chat variants. - **Developed by:** OpenLLM-Ro <!-- - **Funded by [optional]:** [More Information Needed] --> <!-- - **Shared by [optional]:** [More Information Needed] --> <!-- - **Model type:** [More Information Needed] --> - **Language(s):** Romanian - **License:** cc-by-nc-4.0 - **Finetuned from model:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) <!-- - **Finetuned from model [optional]:** [More Information Needed] --> ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/OpenLLM-Ro/llama-recipes - **Paper:** https://arxiv.org/abs/2405.07703 ## Intended Use ### Intended Use Cases RoMistral is intended for research use in Romanian. Base models can be adapted for a variety of natural language tasks, while instruction- and chat-tuned models are intended for assistant-like chat. ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> Use in any manner that violates the license or any applicable laws or regulations, or use in languages other than Romanian. ## How to Get Started with the Model Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("OpenLLM-Ro/RoMistral-7b-Instruct")
model = AutoModelForCausalLM.from_pretrained("OpenLLM-Ro/RoMistral-7b-Instruct")

instruction = "Ce jocuri de societate pot juca cu prietenii mei?"
chat = [
    {"role": "user", "content": instruction},
]

prompt = tokenizer.apply_chat_template(chat, tokenize=False, system_message="")
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```

## Benchmarks | Model | Average | ARC | MMLU | Winogrande | HellaSwag | GSM8k | TruthfulQA | |--------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | Mistral-7B-Instruct-v0.2 | 45.63 | 43.09 | 44.87 | 59.26 | 54.12 | 10.86 | **61.56** | | *RoMistral-7b-Instruct* | ***52.49*** | ***50.39*** | ***51.64*** | ***66.69*** | ***60.24*** | ***33.71*** | 52.59 | ## MT-Bench | Model | Average | 1st turn | 2nd turn | |--------------------|:--------:|:--------:|:--------:| | Mistral-7B-Instruct-v0.2 | 5.84 | 6.06 | **5.63** | | *RoMistral-7b-Instruct* | ***5.92*** | ***6.53*** | *5.415* | ## RoMistral Model Family | Model | Link | |--------------------|:--------:| |*RoMistral-7b-Instruct*| [link](https://huggingface.co/OpenLLM-Ro/RoMistral-7b-Instruct) | <!-- ## Citation If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. **BibTeX:** [More Information Needed] **APA:** [More Information Needed] -->
nazmalaz/poca-SoccerTwos
nazmalaz
2024-05-24T07:57:24Z
6
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "ML-Agents-SoccerTwos", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "region:us" ]
reinforcement-learning
2024-05-22T12:12:48Z
--- library_name: ml-agents tags: - ML-Agents-SoccerTwos - SoccerTwos - deep-reinforcement-learning - reinforcement-learning --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: nazmalaz/poca-SoccerTwos 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
mastikaui/NLP-Sentiment-Analysis-Airline-Tweets-with-BERT-V2
mastikaui
2024-05-24T07:44:17Z
113
1
transformers
[ "transformers", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-17T03:59:39Z
# NLP-Sentiment-Analysis-Airline-Tweets-with-BERT-V2 This repository features a sentiment analysis project that leverages BERT, a leading NLP model. The project involves pre-processing, tokenization, and BERT customization for airline tweet sentiment classification. The model starts from the original "BERT base model (no casing)" checkpoint, uses the dataset https://www.kaggle.com/datasets/crowdflower/twitter-airline-sentiment, and goes through several stages to reach its results; evaluation set accuracy: 0.8203551912568307. Colab notebook for improvements: https://colab.research.google.com/drive/1IQen2iNXkjOgdzjyi7PQyLFqHyqHTF3A?usp=sharing ## Classification report for more detailed evaluation: | | precision | recall | f1-score | support | |-----------|-----------|--------|----------|---------| | negative | 0.88 | 0.90 | 0.89 | 959 | | neutral | 0.68 | 0.58 | 0.62 | 293 | | positive | 0.72 | 0.81 | 0.76 | 212 | | accuracy | | | 0.82 | 1464 | | macro avg | 0.76 | 0.76 | 0.76 | 1464 | | weighted avg | 0.82 | 0.82 | 0.82 | 1464 | The sentiment classification model achieved a promising overall accuracy of 82.04%, built on BertForSequenceClassification and trained for 10 epochs using AdamW optimization. The model exhibited stable performance, with validation accuracy consistently between 0.79 and 0.81, indicating effective learning. Additionally, it showed high precision, particularly for negative sentiment (0.88), along with moderate scores for neutral (0.68) and positive (0.72) sentiments. These results were supported by recall and F1-score metrics, providing a comprehensive understanding of performance across sentiment classes. The analysis of the confusion matrix revealed strong alignment between model predictions and actual labels, albeit with opportunities for improvement, such as addressing overfitting or parameter adjustment, evident from performance fluctuations across epochs. Developed by: Mastika
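A minimal inference sketch for this checkpoint; the exported config may expose generic LABEL_0/1/2 ids rather than sentiment names, so the label mapping is an assumption to verify against the repo's config.json:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="mastikaui/NLP-Sentiment-Analysis-Airline-Tweets-with-BERT-V2")
# Labels may surface as LABEL_0/1/2; map them to negative/neutral/positive as appropriate.
print(clf("The flight was delayed for three hours and nobody told us anything."))
```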
Zoyd/OpenLLM-Ro_RoMistral-7b-Instruct-4_0bpw_exl2
Zoyd
2024-05-24T07:44:01Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "ro", "arxiv:2405.07703", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "exl2", "region:us" ]
text-generation
2024-05-24T07:40:45Z
--- license: cc-by-nc-4.0 language: - ro --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> RoMistral is a family of pretrained and fine-tuned generative text models for Romanian. This is the repository for the **instruct 7B model**. Links to other models can be found at the bottom of this page. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> OpenLLM-Ro represents the first open-source effort to build an LLM specialized for Romanian. OpenLLM-Ro develops and publicly releases a collection of Romanian LLMs, both as foundational models and as instruct and chat variants. - **Developed by:** OpenLLM-Ro <!-- - **Funded by [optional]:** [More Information Needed] --> <!-- - **Shared by [optional]:** [More Information Needed] --> <!-- - **Model type:** [More Information Needed] --> - **Language(s):** Romanian - **License:** cc-by-nc-4.0 - **Finetuned from model:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) <!-- - **Finetuned from model [optional]:** [More Information Needed] --> ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/OpenLLM-Ro/llama-recipes - **Paper:** https://arxiv.org/abs/2405.07703 ## Intended Use ### Intended Use Cases RoMistral is intended for research use in Romanian. Base models can be adapted for a variety of natural language tasks, while instruction- and chat-tuned models are intended for assistant-like chat. ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> Use in any manner that violates the license or any applicable laws or regulations, or use in languages other than Romanian. ## How to Get Started with the Model Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("OpenLLM-Ro/RoMistral-7b-Instruct")
model = AutoModelForCausalLM.from_pretrained("OpenLLM-Ro/RoMistral-7b-Instruct")

instruction = "Ce jocuri de societate pot juca cu prietenii mei?"
chat = [
    {"role": "user", "content": instruction},
]

prompt = tokenizer.apply_chat_template(chat, tokenize=False, system_message="")
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```

## Benchmarks | Model | Average | ARC | MMLU | Winogrande | HellaSwag | GSM8k | TruthfulQA | |--------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | Mistral-7B-Instruct-v0.2 | 45.63 | 43.09 | 44.87 | 59.26 | 54.12 | 10.86 | **61.56** | | *RoMistral-7b-Instruct* | ***52.49*** | ***50.39*** | ***51.64*** | ***66.69*** | ***60.24*** | ***33.71*** | 52.59 | ## MT-Bench | Model | Average | 1st turn | 2nd turn | |--------------------|:--------:|:--------:|:--------:| | Mistral-7B-Instruct-v0.2 | 5.84 | 6.06 | **5.63** | | *RoMistral-7b-Instruct* | ***5.92*** | ***6.53*** | *5.415* | ## RoMistral Model Family | Model | Link | |--------------------|:--------:| |*RoMistral-7b-Instruct*| [link](https://huggingface.co/OpenLLM-Ro/RoMistral-7b-Instruct) | <!-- ## Citation If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. **BibTeX:** [More Information Needed] **APA:** [More Information Needed] -->
gagan3012/tinyllama-20480
gagan3012
2024-05-24T07:40:24Z
105
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/tinyllama-chat-bnb-4bit", "base_model:finetune:unsloth/tinyllama-chat-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T07:38:20Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft base_model: unsloth/tinyllama-chat-bnb-4bit --- # Uploaded model - **Developed by:** gagan3012 - **License:** apache-2.0 - **Finetuned from model:** unsloth/tinyllama-chat-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
juan-mathew/distilbert-base-uncased
juan-mathew
2024-05-24T07:35:24Z
120
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-23T09:09:45Z
--- tags: - generated_from_trainer metrics: - matthews_correlation model-index: - name: distilbert-base-uncased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8081 - Matthews Correlation: 0.5533 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5224 | 1.0 | 535 | 0.4595 | 0.4332 | | 0.3513 | 2.0 | 1070 | 0.4861 | 0.5317 | | 0.224 | 3.0 | 1605 | 0.6453 | 0.5187 | | 0.1721 | 4.0 | 2140 | 0.7943 | 0.5286 | | 0.1254 | 5.0 | 2675 | 0.8081 | 0.5533 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.3.0 - Datasets 2.19.1 - Tokenizers 0.19.1
vwxyzjn/ppo_tldr
vwxyzjn
2024-05-24T07:34:39Z
154
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "conversational", "base_model:cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr", "base_model:finetune:cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-08T02:28:00Z
--- base_model: cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr tags: - generated_from_trainer model-index: - name: ppo_tldr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ppo_tldr This model is a fine-tuned version of [cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr](https://huggingface.co/cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.19.1
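The card omits a usage snippet; the sketch below assumes the TL;DR prompt convention from the summarize-from-feedback line of work, and the exact formatting this checkpoint saw during training is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("vwxyzjn/ppo_tldr")
model = AutoModelForCausalLM.from_pretrained("vwxyzjn/ppo_tldr")

# Prompt format is an assumption: post text followed by "TL;DR:".
prompt = "POST: I borrowed my roommate's bike without asking and got a flat tire on the way home.\n\nTL;DR:"
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=48, do_sample=False)
print(tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True))
```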
yihanwang617/tinyllama-sft-vicuna-full-no-completion-mask
yihanwang617
2024-05-24T07:31:19Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "conversational", "dataset:yihanwang617/vicuna_cleaned", "base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "base_model:finetune:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T05:47:19Z
--- license: apache-2.0 base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T tags: - alignment-handbook - trl - sft - generated_from_trainer - trl - sft - generated_from_trainer datasets: - yihanwang617/vicuna_cleaned model-index: - name: tinyllama-sft-vicuna-full-no-completion-mask results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinyllama-sft-vicuna-full-no-completion-mask This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the yihanwang617/vicuna_cleaned dataset. It achieves the following results on the evaluation set: - Loss: 0.8864 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9086 | 1.0 | 732 | 0.8864 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.14.6 - Tokenizers 0.15.0
LaLegumbreArtificial/xlm-roberta-base-finetuned-panx-de
LaLegumbreArtificial
2024-05-24T07:30:08Z
137
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-24T06:50:31Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8629522349065712 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1352 - F1: 0.8630 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2561 | 1.0 | 525 | 0.1654 | 0.8268 | | 0.128 | 2.0 | 1050 | 0.1401 | 0.8528 | | 0.0819 | 3.0 | 1575 | 0.1352 | 0.8630 | ### Framework versions - Transformers 4.16.2 - Pytorch 2.3.0 - Datasets 1.16.1 - Tokenizers 0.19.1
bacen/results
bacen
2024-05-24T07:26:44Z
162
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-base-960h", "base_model:finetune:facebook/wav2vec2-base-960h", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-24T07:25:58Z
---
license: apache-2.0
base_model: facebook/wav2vec2-base-960h
tags:
- generated_from_trainer
model-index:
- name: results
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# results

This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
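## Example usage (sketch)

The card does not say how to call the model, so this sketch simply assumes the standard Wav2Vec2 CTC interface inherited from `facebook/wav2vec2-base-960h`; `sample.wav` is a placeholder path.

```python
# Hedged ASR sketch; assumes the base model's CTC interface is unchanged.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="bacen/results")
print(asr("sample.wav")["text"])  # placeholder audio file
```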
Shrimalip/m2m100_418M-finetuned-hi-to-en
Shrimalip
2024-05-24T07:26:18Z
10
0
transformers
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "generated_from_trainer", "base_model:facebook/m2m100_418M", "base_model:finetune:facebook/m2m100_418M", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-23T16:30:06Z
---
license: mit
base_model: facebook/m2m100_418M
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: m2m100_418M-finetuned-hi-to-en
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# m2m100_418M-finetuned-hi-to-en

This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1973
- Bleu: 0.0
- Gen Len: 5.7184

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-------:|:-----:|:---------------:|:-------:|:-------:|
| 2.6398 | 0.1100 | 500 | 2.5624 | 2.434 | 5.8204 |
| 2.6877 | 0.2199 | 1000 | 2.4067 | 6.9764 | 5.6658 |
| 2.6 | 0.3299 | 1500 | 2.3000 | 4.9574 | 5.6818 |
| 2.5495 | 0.4399 | 2000 | 2.2093 | 13.5783 | 5.7773 |
| 2.4986 | 0.5498 | 2500 | 2.1232 | 12.0884 | 5.7156 |
| 2.4475 | 0.6598 | 3000 | 2.0526 | 0.0 | 5.7829 |
| 2.418 | 0.7697 | 3500 | 1.9804 | 0.0 | 5.7902 |
| 2.3652 | 0.8797 | 4000 | 1.9253 | 0.0 | 5.7564 |
| 2.3625 | 0.9897 | 4500 | 1.8681 | 0.0 | 5.7984 |
| 2.024 | 1.0996 | 5000 | 1.8020 | 0.0 | 5.81 |
| 2.0017 | 1.2096 | 5500 | 1.7601 | 0.0 | 5.7493 |
| 2.0036 | 1.3196 | 6000 | 1.7208 | 0.0 | 5.8507 |
| 1.9983 | 1.4295 | 6500 | 1.6662 | 0.0 | 5.742 |
| 1.9838 | 1.5395 | 7000 | 1.6273 | 0.0 | 5.8033 |
| 1.9755 | 1.6494 | 7500 | 1.5914 | 0.0 | 5.8629 |
| 1.9679 | 1.7594 | 8000 | 1.5436 | 0.0 | 5.8751 |
| 1.9386 | 1.8694 | 8500 | 1.5154 | 0.0 | 5.8762 |
| 1.9299 | 1.9793 | 9000 | 1.4725 | 0.0 | 5.82 |
| 1.6886 | 2.0893 | 9500 | 1.4242 | 0.0 | 5.7729 |
| 1.6454 | 2.1993 | 10000 | 1.3867 | 0.0 | 5.7042 |
| 1.6361 | 2.3092 | 10500 | 1.3544 | 0.0 | 5.6789 |
| 1.6482 | 2.4192 | 11000 | 1.3346 | 0.0 | 5.7051 |
| 1.6528 | 2.5291 | 11500 | 1.3043 | 0.0 | 5.7147 |
| 1.6687 | 2.6391 | 12000 | 1.2718 | 0.0 | 5.7633 |
| 1.6428 | 2.7491 | 12500 | 1.2417 | 0.0 | 5.7318 |
| 1.6547 | 2.8590 | 13000 | 1.2086 | 0.0 | 5.7536 |
| 1.6467 | 2.9690 | 13500 | 1.1895 | 0.0 | 5.7458 |
| 1.4526 | 3.0790 | 14000 | 1.1425 | 0.0 | 5.7869 |
| 1.3555 | 3.1889 | 14500 | 1.1204 | 0.0 | 5.7491 |
| 1.4007 | 3.2989 | 15000 | 1.1010 | 0.0 | 5.8267 |
| 1.3799 | 3.4088 | 15500 | 1.0754 | 0.0 | 5.7482 |
| 1.401 | 3.5188 | 16000 | 1.0460 | 0.0 | 5.7571 |
| 1.4093 | 3.6288 | 16500 | 1.0239 | 0.0 | 5.7262 |
| 1.3997 | 3.7387 | 17000 | 1.0024 | 0.0 | 5.692 |
| 1.4162 | 3.8487 | 17500 | 0.9869 | 0.0 | 5.7273 |
| 1.4102 | 3.9587 | 18000 | 0.9558 | 0.0 | 5.7613 |
| 1.2476 | 4.0686 | 18500 | 0.9296 | 0.0 | 5.7113 |
| 1.1591 | 4.1786 | 19000 | 0.9163 | 0.0 | 5.7651 |
| 1.1861 | 4.2885 | 19500 | 0.9017 | 0.0 | 5.7498 |
| 1.1799 | 4.3985 | 20000 | 0.8841 | 0.0 | 5.7884 |
| 1.1902 | 4.5085 | 20500 | 0.8635 | 0.0 | 5.7613 |
| 1.193 | 4.6184 | 21000 | 0.8448 | 0.0 | 5.7507 |
| 1.1955 | 4.7284 | 21500 | 0.8266 | 0.0 | 5.7602 |
| 1.2062 | 4.8384 | 22000 | 0.8069 | 0.0 | 5.7562 |
| 1.2058 | 4.9483 | 22500 | 0.7805 | 0.0 | 5.7087 |
| 1.0832 | 5.0583 | 23000 | 0.7583 | 0.0 | 5.7631 |
| 0.9869 | 5.1682 | 23500 | 0.7497 | 0.0 | 5.7284 |
| 0.9956 | 5.2782 | 24000 | 0.7356 | 0.0 | 5.7438 |
| 1.0164 | 5.3882 | 24500 | 0.7253 | 0.0 | 5.7789 |
| 1.017 | 5.4981 | 25000 | 0.7075 | 0.0 | 5.7462 |
| 1.0365 | 5.6081 | 25500 | 0.6890 | 0.0 | 5.7487 |
| 1.0421 | 5.7181 | 26000 | 0.6770 | 0.0 | 5.7547 |
| 1.0344 | 5.8280 | 26500 | 0.6560 | 0.0 | 5.7624 |
| 1.0286 | 5.9380 | 27000 | 0.6429 | 0.0 | 5.7816 |
| 0.9637 | 6.0479 | 27500 | 0.6257 | 0.0 | 5.7547 |
| 0.8297 | 6.1579 | 28000 | 0.6144 | 0.0 | 5.7649 |
| 0.8625 | 6.2679 | 28500 | 0.6038 | 0.0 | 5.7442 |
| 0.8587 | 6.3778 | 29000 | 0.5889 | 0.0 | 5.7633 |
| 0.8732 | 6.4878 | 29500 | 0.5788 | 0.0 | 5.7676 |
| 0.8738 | 6.5978 | 30000 | 0.5673 | 0.0 | 5.7698 |
| 0.8938 | 6.7077 | 30500 | 0.5521 | 0.0 | 5.7929 |
| 0.8797 | 6.8177 | 31000 | 0.5410 | 0.0 | 5.7542 |
| 0.9055 | 6.9276 | 31500 | 0.5284 | 0.0 | 5.7551 |
| 0.8408 | 7.0376 | 32000 | 0.5154 | 0.0 | 5.754 |
| 0.7278 | 7.1476 | 32500 | 0.5106 | 0.0 | 5.7602 |
| 0.7357 | 7.2575 | 33000 | 0.4958 | 0.0 | 5.7422 |
| 0.7498 | 7.3675 | 33500 | 0.4906 | 0.0 | 5.734 |
| 0.7524 | 7.4775 | 34000 | 0.4804 | 0.0 | 5.7136 |
| 0.7609 | 7.5874 | 34500 | 0.4716 | 0.0 | 5.7504 |
| 0.7555 | 7.6974 | 35000 | 0.4621 | 38.6861 | 5.7544 |
| 0.7752 | 7.8073 | 35500 | 0.4493 | 0.0 | 5.7429 |
| 0.7656 | 7.9173 | 36000 | 0.4387 | 0.0 | 5.7484 |
| 0.7329 | 8.0273 | 36500 | 0.4281 | 0.0 | 5.7364 |
| 0.6314 | 8.1372 | 37000 | 0.4251 | 0.0 | 5.7453 |
| 0.6595 | 8.2472 | 37500 | 0.4161 | 0.0 | 5.7393 |
| 0.6566 | 8.3572 | 38000 | 0.4125 | 0.0 | 5.7502 |
| 0.6582 | 8.4671 | 38500 | 0.4043 | 0.0 | 5.7364 |
| 0.6579 | 8.5771 | 39000 | 0.3962 | 0.0 | 5.7422 |
| 0.6622 | 8.6870 | 39500 | 0.3878 | 0.0 | 5.76 |
| 0.6547 | 8.7970 | 40000 | 0.3790 | 0.0 | 5.7642 |
| 0.6682 | 8.9070 | 40500 | 0.3701 | 0.0 | 5.7549 |
| 0.6499 | 9.0169 | 41000 | 0.3584 | 0.0 | 5.7333 |
| 0.541 | 9.1269 | 41500 | 0.3547 | 0.0 | 5.7398 |
| 0.5621 | 9.2369 | 42000 | 0.3519 | 0.0 | 5.7322 |
| 0.5673 | 9.3468 | 42500 | 0.3458 | 0.0 | 5.7467 |
| 0.5618 | 9.4568 | 43000 | 0.3407 | 0.0 | 5.7382 |
| 0.5704 | 9.5667 | 43500 | 0.3326 | 0.0 | 5.7536 |
| 0.5816 | 9.6767 | 44000 | 0.3292 | 0.0 | 5.7349 |
| 0.5892 | 9.7867 | 44500 | 0.3194 | 0.0 | 5.7358 |
| 0.5796 | 9.8966 | 45000 | 0.3129 | 0.0 | 5.7369 |
| 0.5807 | 10.0066 | 45500 | 0.3079 | 0.0 | 5.7404 |
| 0.4786 | 10.1166 | 46000 | 0.3033 | 0.0 | 5.7491 |
| 0.4863 | 10.2265 | 46500 | 0.2989 | 0.0 | 5.7331 |
| 0.4979 | 10.3365 | 47000 | 0.2968 | 0.0 | 5.732 |
| 0.5015 | 10.4464 | 47500 | 0.2917 | 0.0 | 5.7229 |
| 0.5105 | 10.5564 | 48000 | 0.2886 | 0.0 | 5.7398 |
| 0.5039 | 10.6664 | 48500 | 0.2830 | 0.0 | 5.7173 |
| 0.5202 | 10.7763 | 49000 | 0.2789 | 0.0 | 5.7218 |
| 0.5123 | 10.8863 | 49500 | 0.2742 | 0.0 | 5.7276 |
| 0.5043 | 10.9963 | 50000 | 0.2670 | 0.0 | 5.7191 |
| 0.4314 | 11.1062 | 50500 | 0.2661 | 0.0 | 5.7364 |
| 0.4345 | 11.2162 | 51000 | 0.2612 | 0.0 | 5.7262 |
| 0.4411 | 11.3261 | 51500 | 0.2592 | 0.0 | 5.7233 |
| 0.447 | 11.4361 | 52000 | 0.2568 | 0.0 | 5.7344 |
| 0.453 | 11.5461 | 52500 | 0.2528 | 0.0 | 5.7231 |
| 0.4485 | 11.6560 | 53000 | 0.2496 | 0.0 | 5.7311 |
| 0.4472 | 11.7660 | 53500 | 0.2460 | 0.0 | 5.7167 |
| 0.4567 | 11.8760 | 54000 | 0.2412 | 0.0 | 5.7256 |
| 0.4528 | 11.9859 | 54500 | 0.2381 | 0.0 | 5.7264 |
| 0.404 | 12.0959 | 55000 | 0.2342 | 0.0 | 5.7187 |
| 0.3995 | 12.2059 | 55500 | 0.2333 | 0.0 | 5.7293 |
| 0.3989 | 12.3158 | 56000 | 0.2317 | 0.0 | 5.7104 |
| 0.3988 | 12.4258 | 56500 | 0.2284 | 0.0 | 5.7242 |
| 0.3991 | 12.5357 | 57000 | 0.2261 | 0.0 | 5.7276 |
| 0.4075 | 12.6457 | 57500 | 0.2234 | 0.0 | 5.7198 |
| 0.4074 | 12.7557 | 58000 | 0.2207 | 0.0 | 5.7262 |
| 0.398 | 12.8656 | 58500 | 0.2178 | 0.0 | 5.7282 |
| 0.4003 | 12.9756 | 59000 | 0.2162 | 0.0 | 5.7291 |
| 0.374 | 13.0856 | 59500 | 0.2145 | 0.0 | 5.7271 |
| 0.3749 | 13.1955 | 60000 | 0.2126 | 0.0 | 5.7287 |
| 0.3589 | 13.3055 | 60500 | 0.2109 | 0.0 | 5.7356 |
| 0.3734 | 13.4154 | 61000 | 0.2095 | 0.0 | 5.7329 |
| 0.3706 | 13.5254 | 61500 | 0.2087 | 0.0 | 5.7327 |
| 0.3781 | 13.6354 | 62000 | 0.2071 | 0.0 | 5.7296 |
| 0.3735 | 13.7453 | 62500 | 0.2060 | 0.0 | 5.7287 |
| 0.372 | 13.8553 | 63000 | 0.2039 | 0.0 | 5.718 |
| 0.3751 | 13.9653 | 63500 | 0.2024 | 0.0 | 5.728 |
| 0.3573 | 14.0752 | 64000 | 0.2014 | 0.0 | 5.7189 |
| 0.3322 | 14.1852 | 64500 | 0.2010 | 0.0 | 5.7204 |
| 0.3359 | 14.2951 | 65000 | 0.2003 | 0.0 | 5.7227 |
| 0.3533 | 14.4051 | 65500 | 0.1994 | 0.0 | 5.7222 |
| 0.3489 | 14.5151 | 66000 | 0.1986 | 0.0 | 5.7198 |
| 0.3358 | 14.6250 | 66500 | 0.1981 | 0.0 | 5.7231 |
| 0.3424 | 14.7350 | 67000 | 0.1977 | 0.0 | 5.72 |
| 0.3341 | 14.8450 | 67500 | 0.1976 | 0.0 | 5.7209 |
| 0.3513 | 14.9549 | 68000 | 0.1973 | 0.0 | 5.7184 |

### Framework versions

- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
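## Example usage (sketch)

The card reports metrics but no inference code. The sketch below follows the standard M2M-100 interface and assumes the fine-tune keeps the base model's language-code handling; the Hindi sentence is illustrative.

```python
# Hedged Hindi-to-English translation sketch using the standard M2M-100 API.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "Shrimalip/m2m100_418M-finetuned-hi-to-en"
model = M2M100ForConditionalGeneration.from_pretrained(model_id)
tokenizer = M2M100Tokenizer.from_pretrained(model_id)

tokenizer.src_lang = "hi"  # source language: Hindi
encoded = tokenizer("जीवन एक चॉकलेट के डिब्बे की तरह है।", return_tensors="pt")
generated = model.generate(
    **encoded,
    forced_bos_token_id=tokenizer.get_lang_id("en"),  # force English output
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```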
amazingT/dqn-SpaceInvadersNoFrameskip-v4
amazingT
2024-05-24T07:23:31Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-23T09:16:21Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 686.00 +/- 241.94
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):

```bash
pip install rl_zoo3
```

```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga amazingT -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:

```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga amazingT -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga amazingT
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 10000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```

# Environment Arguments

```python
{'render_mode': 'rgb_array'}
```
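## Loading the checkpoint in Python (sketch)

As an alternative to the RL Zoo CLI above, the checkpoint can be loaded directly with the `huggingface_sb3` helper. The archive filename inside the repo is an assumption (RL Zoo uploads typically follow this naming), and playing the env still requires the same wrappers listed in the hyperparameters.

```python
# Hedged loading sketch; the filename inside the repo is an assumption.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="amazingT/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed archive name
)
model = DQN.load(checkpoint)
# Evaluation requires an env built with AtariWrapper and 4-frame stacking,
# matching the env_wrapper/frame_stack settings above.
```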
COCO0414/bge-m3-ko_DNF
COCO0414
2024-05-24T07:18:38Z
9
2
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-21T07:33:47Z
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---

# COCO0414/bge-m3-ko_DNF

This model was built for RAG. It is a [bge-m3](https://huggingface.co/BAAI/bge-m3)-based model fine-tuned on Dungeon & Fighter data.

Training data: the [Kakao Brain](https://github.com/kakaobrain/kor-nlu-datasets) koSTS and koNLI datasets, plus 79k QA pairs extracted from 20k articles on the [Dungeon & Fighter](https://df.nexon.com) user-community QA board and 13k QA pairs generated with ChatGPT from [DFU](https://www.dnf-universe.com/) articles.

The final model, bge-m3-ko_DNF, is a merge of the original bge-m3 and the fine-tuned model; for the rationale behind merging, see the bge-m3 fine-tuning [page](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).

While overall similarity scores have decreased, the discriminative power between queries (questions) and chunks (the answer search space) has increased, allowing more accurate retrieval of Dungeon & Fighter information.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
import torch

question = "사도들에 대해서 알려줘"

# Data from namuwiki
reval_content = "제6 사도제7 사도제8 사도무형의 시로코검은 질병의 디레지에불을 먹는 안톤긴 발의 로터스제9 사도제10 사도제11 사도제12 사도(舊) 폭룡왕 바칼(新) 건설자 루크성안의 미카엘라혼돈의 오즈마세리아 키르민 같이 보기: 테아나(인공신)제1 사도 - 숙명의 카인. 죽음이 두려워하는 자.제2 사도 - 우는 눈의 힐더. 두 가지 얼굴에 찬연히 빛나는 이슬을 감춘 자.[A] 제3 사도 - 창공의 이시스-프레이 (프레이-이시스). 땅에 발을 딛지 않는 자.제4 사도 - 정복자 카시야스. 피로 강철을 적시는 자.제5 사도† - 무형의 시로코. 수 백의 얼굴을 가졌으되 보이지 않는 자. - (오리진 이전)록시에게 사망. (오리진 이후) 4인의 웨펀마스터에게 사망.[4]이후 그림시커가 사념을 모두 모아 다시 부활하고 하늘성을 점령한 뒤 이튼 공업지대가 있는 천계를 노리고 있다. 그러나 결국 미리 대기하고 있던 천계군에게 집중포화로 또 다시 사망.제6 사도 - 검은 질병의 디레지에. 더러운 피를 흘리는 자. - 노스마이어로 전이되어 모험가와 미쉘, 아젤리아에게 패배한 뒤 힐더의 계략으로 인해 차원의 틈으로 빨려들어가 온 몸이 갈갈이 찢겨 나가며 봉인. 설정상 불사이기에 소멸하지 않았고, 불안정한 차원 속에서 다시 힘을 기르는 중.제7 사도† - 불을 먹는 안톤. 몸을 뻗어 능히 세상 끝에 닿을 수 있는 자. - 파워스테이션에 전이되어 모험가와 천계 토벌군에게 사망.제8 사도† - 긴발의 로터스. 한번에 수천의 무기를 쥘 수 있는 자. - 베히모스로 전이되어 3인의 웨펀마스터들과 제국군, 모험가에게 사망.(舊)제9 사도† - 폭룡왕 바칼. 불의 숨결을 내는 자. - 500년 전 마계에서 도망쳐 나와 천계를 지배하던 중 천계 저항군에 의해 사망.[5](新)제9 사도† - 건설자 루크. 말 없이 흙을 만지는 자. - 자신의 거주지인 죽은 자의 성에서 사망.[6]제10 사도 - 성안의 미카엘라. 진실을 꿰뚫어 보는 자.[7]제11 사도† - 혼돈의 오즈마. 죽음에서 일어난 자. 검은 대지에서 카잔에 의해 봉인 중이었으나, 이후 봉인이 완전히 풀려 날뛰다가 모험가들과의 결전으로 사망.제12 사도 - 세리아 키르민. 비밀을 알고 있는 자.[A]현재 플레인 : 아라드의 이야기 진행 상황에서 살아있는 사도는 카인, 힐더, 이시스-프레이, 카시야스, 디레지에, 미카엘라, 세리아까지 총 7명이다.사도의 순위는 마계에 올라탄 순서이긴 한데 서열에도 영향이 있긴 한지 제4사도 카시야스의 경우 구 스토리에서는 서열 4위의 사도라고 한다. 미카엘라부터는 마계랑 인연이 없는 사도이다.3. 모티브[편집]유래는 예수의 열두 제자인 사도이다. 호칭도 같은데다가 마침 숫자도 딱 열둘이고 그들 중 하나가 내쳐지고 다른 이가 임명된 일까지 예수의 열두 사도랑 딱 들어맞는다.[9] 창세기, 종세기, 창신세기로 이루어진 던파의 성서도 성경을 모티브로 한 것이므로 어찌 보면 자연스러운 연결이다.또한, 각 사도들에 반영된 개별적 모티브는 다음과 같다. 언급되지 않은 사도는 네오플에서 독자적으로 창작한 케이스다.카인: 성경(아담, 카인)힐더: 성경(하와)이시스-프레이/프레이-이시스: 슬라브 신화(벨로보그, 체르노보그), 북유럽 신화(프레이), 이집트 신화(이시스) [10]카시야스: 일본 요괴(오니)시로코: 사하라 사막의 열풍(시로코)[11]디레지에: 질병(흑사병 등)로터스: 바다괴물(크라켄)바칼: 악마성 드라큘라(드라큘라 블라드 체페슈)루크: 켈트 신화(루 라와더)미카엘라: 가톨릭(메시아)오즈마: 악마성 드라큘라(드라큘라 블라드 체페슈)세리아 키르민: ???3.1. 문양[편집]사도에겐 자신을 상징하는 고유의 문양이 존재하며 사도와 관련된 사건이 일어남에 따라 문양이 변경되는 경우도 있다.[12]카인은 그를 두려워하고 경외하는 자들에 의해서 문양이 만들어졌다. 힐더는 자신이 가진 우는 눈을 형상화하여 문양을 만들었다. 테라코타에서는 이 문양을 신성하게 여긴다. 이시스-프레이는 그를 따르는 추종자들에 의해서 문양이 만들어졌으며, 밤의 마천루 전투 이후에 이시스의 상징이[13] 추가"

model = SentenceTransformer('COCO0414/bge-m3-ko_DNF')

question_embedding = model.encode(question, convert_to_tensor=True)
answer_embedding = model.encode(reval_content, convert_to_tensor=True)

similarity = torch.nn.functional.cosine_similarity(question_embedding, answer_embedding, dim=0)
print(similarity.item())  # 0.6637
```
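## Ranking multiple chunks (sketch)

A short extension of the card's example (not from the card itself): rank several candidate chunks against one query, as a RAG retriever would. The chunk strings are placeholders.

```python
# Hedged retrieval sketch; chunk texts are placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("COCO0414/bge-m3-ko_DNF")
query = "사도들에 대해서 알려줘"
chunks = ["placeholder chunk 1", "placeholder chunk 2", "placeholder chunk 3"]

query_emb = model.encode(query, convert_to_tensor=True)
chunk_embs = model.encode(chunks, convert_to_tensor=True)
scores = util.cos_sim(query_emb, chunk_embs)[0]  # cosine similarity per chunk
for score, chunk in sorted(zip(scores.tolist(), chunks), reverse=True):
    print(f"{score:.4f}  {chunk}")
```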
gonnylee/testModel35
gonnylee
2024-05-24T07:16:40Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-24T07:16:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DarrenChensformer/llava-1.5-7b-hf-vsft
DarrenChensformer
2024-05-24T07:13:21Z
3
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:llava-hf/llava-1.5-7b-hf", "base_model:adapter:llava-hf/llava-1.5-7b-hf", "region:us" ]
null
2024-05-24T07:12:01Z
---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: llava-hf/llava-1.5-7b-hf
model-index:
- name: llava-1.5-7b-hf-vsft
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# llava-1.5-7b-hf-vsft

This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1.4e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0

### Training results

### Framework versions

- PEFT 0.11.1
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.19.1
- Tokenizers 0.15.1
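## Loading the adapter (sketch)

The card documents training only. This hedged sketch assumes the repo holds a PEFT (LoRA) adapter for the `llava-hf/llava-1.5-7b-hf` base, as the tags suggest:

```python
# Hedged PEFT-adapter loading sketch.
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration
from peft import PeftModel

base = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "DarrenChensformer/llava-1.5-7b-hf-vsft")
processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")
```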
SunJack/mistra
SunJack
2024-05-24T07:09:06Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-24T07:09:06Z
--- license: apache-2.0 ---
QuantFactory/Hermes-2-Theta-Llama-3-8B-GGUF
QuantFactory
2024-05-24T07:05:40Z
115
0
null
[ "gguf", "Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "merges", "text-generation", "en", "dataset:teknium/OpenHermes-2.5", "base_model:NousResearch/Hermes-2-Theta-Llama-3-8B", "base_model:quantized:NousResearch/Hermes-2-Theta-Llama-3-8B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-24T05:00:49Z
---
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
tags:
- Llama-3
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- merges
model-index:
- name: Hermes-2-Pro-Llama-3-Instruct-8B-Merge
  results: []
language:
- en
datasets:
- teknium/OpenHermes-2.5
widget:
- example_title: Hermes 2 Pro Llama-3 Instruct Merge
  messages:
  - role: system
    content: >-
      You are a sentient, superintelligent artificial general intelligence, here
      to teach and assist me.
  - role: user
    content: >-
      Write a short story about Goku discovering kirby has teamed up with Majin
      Buu to destroy the world.
license: apache-2.0
pipeline_tag: text-generation
---

# Hermes-2 Θ Llama-3 8B - GGUF

This is a quantized version of [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) created using llama.cpp.

## Model Description

Hermes-2 Θ (Theta) is the first experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard at [Arcee](https://www.arcee.ai/), the team behind MergeKit.

Hermes-2 Θ is a merged and then further RLHF'ed version of our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model, combining the best of both models.

## Example Outputs

### Create New Mythos:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/dvKhnSvHdx4nTQIqB9Lpv.png)

### Chat with a Meta-Cognitive Entity

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/GwdCqowE6GQylineqehhx.png)

### Ask for a structured JSON output:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/us72aL9gwUXdqSHetRVRV.png)

# Prompt Format

Hermes 2 Θ uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.

System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.

This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.

This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.

Prompt with system instruction (use whatever system prompt you like, this is just an example!):

```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```

This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method:

```python
messages = [
    {"role": "system", "content": "You are Hermes 2."},
    {"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```

When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response.

To utilize the prompt format without a system prompt, simply leave the line out.

## Prompt Format for Function Calling

Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real python functions.

You should use the system role with this message, followed by a function signature json as this example shows here.

```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\n\n Args:\n symbol (str): The stock symbol.\n\n Returns:\n dict: A dictionary containing fundamental data.\n Keys:\n - 'symbol': The stock symbol.\n - 'company_name': The long name of the company.\n - 'sector': The sector to which the company belongs.\n - 'industry': The industry to which the company belongs.\n - 'market_cap': The market capitalization of the company.\n - 'pe_ratio': The forward price-to-earnings ratio.\n - 'pb_ratio': The price-to-book ratio.\n - 'dividend_yield': The dividend yield.\n - 'eps': The trailing earnings per share.\n - 'beta': The beta value of the stock.\n - '52_week_high': The 52-week high price of the stock.\n - '52_week_low': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```

To complete the function call, create a user prompt that follows the above system prompt, like so:

```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```

The model will then generate a tool call, which your inference code must parse and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):

```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```

Once you parse the tool call, call the API and get the returned values for the call, and pass it back in as a new role, `tool`, like so:

```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```

The assistant will then read in that data from the function's response, and generate a natural language response:

```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37

This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```

## Prompt Format for JSON Mode / Structured Outputs

Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema.

Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main

```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```

Given the {schema} that you provide, the model will follow the format of that json to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.

# Benchmarks

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/suBbCUIxpcRvhCv6-DBDQ.png)

## GPT4All:

```
|    Task     |Version| Metric |Value |   |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge|      0|acc     |0.5529|±  |0.0145|
|             |       |acc_norm|0.5870|±  |0.0144|
|arc_easy     |      0|acc     |0.8371|±  |0.0076|
|             |       |acc_norm|0.8144|±  |0.0080|
|boolq        |      1|acc     |0.8599|±  |0.0061|
|hellaswag    |      0|acc     |0.6133|±  |0.0049|
|             |       |acc_norm|0.7989|±  |0.0040|
|openbookqa   |      0|acc     |0.3940|±  |0.0219|
|             |       |acc_norm|0.4680|±  |0.0223|
|piqa         |      0|acc     |0.8063|±  |0.0092|
|             |       |acc_norm|0.8156|±  |0.0090|
|winogrande   |      0|acc     |0.7372|±  |0.0124|
```

Average: 72.59

## AGIEval:

```
|             Task             |Version| Metric |Value |   |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat              |      0|acc     |0.2441|±  |0.0270|
|                              |       |acc_norm|0.2441|±  |0.0270|
|agieval_logiqa_en             |      0|acc     |0.3687|±  |0.0189|
|                              |       |acc_norm|0.3840|±  |0.0191|
|agieval_lsat_ar               |      0|acc     |0.2304|±  |0.0278|
|                              |       |acc_norm|0.2174|±  |0.0273|
|agieval_lsat_lr               |      0|acc     |0.5471|±  |0.0221|
|                              |       |acc_norm|0.5373|±  |0.0221|
|agieval_lsat_rc               |      0|acc     |0.6617|±  |0.0289|
|                              |       |acc_norm|0.6357|±  |0.0294|
|agieval_sat_en                |      0|acc     |0.7670|±  |0.0295|
|                              |       |acc_norm|0.7379|±  |0.0307|
|agieval_sat_en_without_passage|      0|acc     |0.4417|±  |0.0347|
|                              |       |acc_norm|0.4223|±  |0.0345|
|agieval_sat_math              |      0|acc     |0.4000|±  |0.0331|
|                              |       |acc_norm|0.3455|±  |0.0321|
```

Average: 44.05

## BigBench:

```
|                      Task                      |Version|       Metric        |Value |   |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement                       |      0|multiple_choice_grade|0.6000|±  |0.0356|
|bigbench_date_understanding                     |      0|multiple_choice_grade|0.6585|±  |0.0247|
|bigbench_disambiguation_qa                      |      0|multiple_choice_grade|0.3178|±  |0.0290|
|bigbench_geometric_shapes                       |      0|multiple_choice_grade|0.2340|±  |0.0224|
|                                                |       |exact_str_match      |0.0000|±  |0.0000|
|bigbench_logical_deduction_five_objects         |      0|multiple_choice_grade|0.2980|±  |0.0205|
|bigbench_logical_deduction_seven_objects        |      0|multiple_choice_grade|0.2057|±  |0.0153|
|bigbench_logical_deduction_three_objects        |      0|multiple_choice_grade|0.5367|±  |0.0288|
|bigbench_movie_recommendation                   |      0|multiple_choice_grade|0.4040|±  |0.0220|
|bigbench_navigate                               |      0|multiple_choice_grade|0.4970|±  |0.0158|
|bigbench_reasoning_about_colored_objects        |      0|multiple_choice_grade|0.7075|±  |0.0102|
|bigbench_ruin_names                             |      0|multiple_choice_grade|0.4821|±  |0.0236|
|bigbench_salient_translation_error_detection    |      0|multiple_choice_grade|0.2295|±  |0.0133|
|bigbench_snarks                                 |      0|multiple_choice_grade|0.6906|±  |0.0345|
|bigbench_sports_understanding                   |      0|multiple_choice_grade|0.5375|±  |0.0159|
|bigbench_temporal_sequences                     |      0|multiple_choice_grade|0.6270|±  |0.0153|
|bigbench_tracking_shuffled_objects_five_objects |      0|multiple_choice_grade|0.2216|±  |0.0118|
|bigbench_tracking_shuffled_objects_seven_objects|      0|multiple_choice_grade|0.1594|±  |0.0088|
|bigbench_tracking_shuffled_objects_three_objects|      0|multiple_choice_grade|0.5367|±  |0.0288|
```

Average: 44.13

**IFEval**: 72.64

**MT_Bench**: Turn 1 - 8.3875, Turn 2 - 8.00625, Average - 8.196875

# Inference Code

Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM).

Note: To use function calling, you should see the GitHub repo above.

```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn

tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-8B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
    "NousResearch/Hermes-2-Theta-Llama-3-8B",
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    load_in_4bit=True,
    use_flash_attention_2=True
)

prompts = [
    """<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]

for chat in prompts:
    print(chat)
    input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
    generated_ids = model.generate(
        input_ids,
        max_new_tokens=750,
        temperature=0.8,
        repetition_penalty=1.1,
        do_sample=True,
        eos_token_id=tokenizer.eos_token_id,
    )
    response = tokenizer.decode(
        generated_ids[0][input_ids.shape[-1]:],
        skip_special_tokens=True,
        clean_up_tokenization_spaces=True,
    )
    print(f"Response: {response}")
```

## Inference Code for Function Calling:

All code for utilizing, parsing, and building function calling templates is available on our GitHub: [https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png)

# Chat Interfaces

When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that, use our GitHub repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and it supports ChatML right out of the box.

In LM-Studio, simply select the ChatML Prefix on the settings side pane:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png)
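## Building a JSON-mode system prompt (sketch)

A hedged sketch of constructing the JSON-mode system prompt above from a pydantic model (assumes pydantic v2; the `Character` class is illustrative and not from the Hermes repo - the repo's own `jsonmode.py` is the reference implementation):

```python
# Hedged sketch: derive the <schema> block from a pydantic v2 model.
import json
from pydantic import BaseModel

class Character(BaseModel):  # illustrative schema, not from the repo
    name: str
    species: str
    power_level: int

schema = json.dumps(Character.model_json_schema(), indent=2)
system_prompt = (
    "You are a helpful assistant that answers in JSON. "
    f"Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema>"
)
print(system_prompt)
```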
nicolauduran45/patstat-citation-parser
nicolauduran45
2024-05-24T07:04:27Z
108
0
transformers
[ "transformers", "safetensors", "distilbert", "token-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-23T14:46:38Z
---
license: mit
widget:
- text: "Na et al., 'Monitoring of peptide acylation inside degrading PLGA microspheres by capillary electrophoresis and MALDI-T of mass spectrometry', Journal of Controlled Release, 2003, 291-299, 92, Elsevier B.V."
  example_title: "Example 1"
- text: "Ryuichiro Higashinaka et al.: “Automatically Acquiring Causal Expression Patterns from Relation-annotated Corpora to Improve Question Answering for why-Questions”, ACM Transactions on Asian Language Information Processing, Association for Computing Machinery, New York, NY, US, vol. 7, No. 2, Apr. 1, 2008, pp. 1-29, XP058 I 46396, ISSN: 1530-0226. DOI: 10.1145/1362782.1362785."
  example_title: "Example 2"
- text: "James W. Stave and Klaus Lindpaintner, Antibody and Antigen Contact Residues Define Epitope and Paratope Size and Structure, The Journal of Immunlogy, Published Jun. 24, 2013, doi:10.4049/jimmunol.1203198."
  example_title: "Example 3"
---
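## Example usage (sketch)

The card ships widget examples but no code; this sketch mirrors the first widget with a token-classification pipeline (the citation string, including its OCR noise, is taken verbatim from the widget):

```python
# Hedged citation-parsing sketch using the standard pipeline API.
from transformers import pipeline

parser = pipeline(
    "token-classification",
    model="nicolauduran45/patstat-citation-parser",
    aggregation_strategy="simple",  # group word pieces into labeled spans
)
citation = (
    "Na et al., 'Monitoring of peptide acylation inside degrading PLGA microspheres "
    "by capillary electrophoresis and MALDI-T of mass spectrometry', "
    "Journal of Controlled Release, 2003, 291-299, 92, Elsevier B.V."
)
for span in parser(citation):
    print(span["entity_group"], "->", span["word"])
```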
eeeyounglee/EEVE-10.8B-Dense-Finetune-2
eeeyounglee
2024-05-24T06:57:50Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "llama", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-24T06:55:21Z
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---

# eeeyounglee/EEVE-10.8B-Dense-Finetune-2

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('eeeyounglee/EEVE-10.8B-Dense-Finetune-2')
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=eeeyounglee/EEVE-10.8B-Dense-Finetune-2)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 309 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`__main__.MultipleNegativesRankingLoss_with_logging`

Parameters of the fit()-Method:
```
{
    "epochs": 1,
    "evaluation_steps": 1000,
    "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 31,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: LlamaModel
  (1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Dense({'in_features': 4096, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
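## Similarity example (sketch)

A short follow-on sketch (not from the card): cosine similarity between the 1024-dimensional embeddings the model produces. The Korean sentences are illustrative.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("eeeyounglee/EEVE-10.8B-Dense-Finetune-2")
embeddings = model.encode(["안녕하세요.", "만나서 반갑습니다."], convert_to_tensor=True)
print(util.cos_sim(embeddings[0], embeddings[1]).item())  # cosine similarity
```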
Sorour/cls_headline_test_v1
Sorour
2024-05-24T06:57:34Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "license:llama3", "region:us" ]
null
2024-05-24T06:52:20Z
---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B
datasets:
- generator
model-index:
- name: cls_headline_test_v1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# cls_headline_test_v1

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the generator dataset. It achieves the following results on the evaluation set:
- Loss: 0.7476

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7916        | 0.4420 | 20   | 0.7744          |
| 0.7731        | 0.8840 | 40   | 0.7476          |

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
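## Loading the adapter (sketch)

The card documents training only. This hedged sketch assumes the repo holds a LoRA adapter for `meta-llama/Meta-Llama-3-8B`, as the `base_model` tag indicates (the gated base weights require access approval):

```python
# Hedged PEFT-adapter loading sketch.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Sorour/cls_headline_test_v1")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
```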