Dataset schema (column, type, observed range or cardinality):

| Column | Type | Range / values |
|:--|:--|:--|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-05-30 06:27:13 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 459 distinct values |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-05-30 06:25:49 |
| card | string | length 11 – 1.01M |
komiljon123/Neon
komiljon123
2024-02-08T08:59:29Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2024-02-08T08:59:29Z
--- license: bigscience-openrail-m ---
magarcd/intel-image-classification
magarcd
2024-02-08T08:55:09Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-02-17T08:18:56Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
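The fastai card above is still the unfilled template and ships no loading code. A minimal sketch follows, assuming the repo follows the standard fastai Hub layout; the image path is an illustrative placeholder, not part of the card.

```python
# A minimal loading sketch, assuming a standard fastai push to the Hub;
# the example image path is a hypothetical placeholder.
from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("magarcd/intel-image-classification")
pred, pred_idx, probs = learner.predict("example_scene.jpg")  # hypothetical local image
print(pred, float(probs[pred_idx]))
```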
RapidOrc121/BERT_sentiment_analysis
RapidOrc121
2024-02-08T08:43:36Z
10
2
bertopic
[ "bertopic", "safetensors", "distilbert", "text-classification", "en", "dataset:carblacac/twitter-sentiment-analysis", "region:us" ]
text-classification
2024-01-26T17:56:23Z
--- datasets: - carblacac/twitter-sentiment-analysis language: - en library_name: bertopic pipeline_tag: text-classification --- LABEL_0="sadness" LABEL_1="joy" LABEL_2="love" LABEL_3="anger" LABEL_4="fear" LABEL_5="surprise"
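The card above only lists the LABEL_n mapping. A minimal inference sketch follows, assuming the checkpoint loads as a standard transformers text-classification model (the tags suggest a distilbert classifier despite the bertopic library_name); the example sentence is illustrative.

```python
# A minimal sketch, assuming the checkpoint works with the standard
# transformers text-classification pipeline; the label mapping is from the card.
from transformers import pipeline

id2label = {
    "LABEL_0": "sadness", "LABEL_1": "joy", "LABEL_2": "love",
    "LABEL_3": "anger", "LABEL_4": "fear", "LABEL_5": "surprise",
}

classifier = pipeline("text-classification", model="RapidOrc121/BERT_sentiment_analysis")
pred = classifier("I can't believe how well this turned out!")[0]
print(id2label.get(pred["label"], pred["label"]), round(pred["score"], 3))
```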
arun100/whisper-base-ar-1
arun100
2024-02-08T08:37:05Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "ar", "dataset:mozilla-foundation/common_voice_16_0", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-07T19:03:58Z
--- language: - ar license: apache-2.0 base_model: openai/whisper-base tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_16_0 metrics: - wer model-index: - name: Whisper Base Arabic results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_16_0 ar type: mozilla-foundation/common_voice_16_0 config: ar split: test args: ar metrics: - name: Wer type: wer value: 80.47772163527792 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Base Arabic This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the mozilla-foundation/common_voice_16_0 ar dataset. It achieves the following results on the evaluation set: - Loss: 0.5856 - Wer: 80.4777 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 10000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.7392 | 1.53 | 500 | 0.8623 | 100.8133 | | 0.5938 | 3.07 | 1000 | 0.7397 | 93.6651 | | 0.5388 | 4.6 | 1500 | 0.6953 | 92.3005 | | 0.4982 | 6.13 | 2000 | 0.6682 | 88.9392 | | 0.4795 | 7.67 | 2500 | 0.6512 | 90.1524 | | 0.4483 | 9.2 | 3000 | 0.6373 | 87.1234 | | 0.4374 | 10.74 | 3500 | 0.6261 | 85.3144 | | 0.4331 | 12.27 | 4000 | 0.6179 | 86.4290 | | 0.4125 | 13.8 | 4500 | 0.6106 | 83.2865 | | 0.3984 | 15.34 | 5000 | 0.6059 | 83.0676 | | 0.4035 | 16.87 | 5500 | 0.6008 | 82.2165 | | 0.3997 | 18.4 | 6000 | 0.5970 | 81.1195 | | 0.3878 | 19.94 | 6500 | 0.5941 | 81.7153 | | 0.3827 | 21.47 | 7000 | 0.5906 | 81.2559 | | 0.3785 | 23.01 | 7500 | 0.5892 | 81.0506 | | 0.372 | 24.54 | 8000 | 0.5882 | 81.4248 | | 0.3655 | 26.07 | 8500 | 0.5865 | 81.0479 | | 0.3697 | 27.61 | 9000 | 0.5856 | 80.4777 | | 0.3658 | 29.14 | 9500 | 0.5849 | 80.6128 | | 0.3539 | 30.67 | 10000 | 0.5848 | 80.6696 | ### Framework versions - Transformers 4.37.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.16.2.dev0 - Tokenizers 0.15.0
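The Whisper card above documents training in detail but gives no inference code. A minimal sketch using the standard transformers ASR pipeline follows; the audio file path is an illustrative assumption, not part of the card.

```python
# A minimal sketch using the standard transformers ASR pipeline;
# the audio file path is a hypothetical placeholder.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="arun100/whisper-base-ar-1")
result = asr("arabic_sample.wav")  # hypothetical local audio file
print(result["text"])
```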
mertllc/mms-tts-tur-fifties_female
mertllc
2024-02-08T08:32:14Z
18
0
transformers
[ "transformers", "safetensors", "vits", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-08T08:29:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sarahai/ru-sum
sarahai
2024-02-08T08:31:52Z
263
2
transformers
[ "transformers", "safetensors", "mt5", "text2text-generation", "summarizer", "text-generation-inference", "summarization", "ru", "dataset:IlyaGusev/gazeta", "base_model:google/mt5-base", "base_model:finetune:google/mt5-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2024-02-06T12:28:21Z
--- license: apache-2.0 language: - ru library_name: transformers base_model: google/mt5-base tags: - summarizer - text-generation-inference datasets: - IlyaGusev/gazeta pipeline_tag: summarization widget: - text: >- В понедельник в Санкт-Петербургском гарнизонном военном суде начались слушания по делу бывшего капитана ФСБ Ивана Круглова. Его обвиняют по ч. 4 статьи 111 УК РФ (умышленное причинение тяжкого вреда здоровью, повлекшее по неосторожности смерть потерпевшего). В прошлом году экс-силовик, не будучи при исполнении служебных обязанностей, застрелил из травматического пистолета случайного прохожего — жителя Петербурга Звиада Хачатуряна, который позднее скончался. В начале заседания сторона подсудимого ходатайствовала перед судом, чтобы сделать процесс полностью закрытым. Адвокат Круглова Лев Кожохин мотивировал ходатайство тем, что в качестве свидетелей привлечены несколько действующих сотрудников ФСБ, а следовательно, могут быть разглашены факты, имеющие отношение к государственной тайне. Однако судья Виталий Краснощеков удовлетворил просьбу частично: заседания будут закрытыми только при допросе сотрудников ФСБ и при обсуждении секретной информации. example_title: Summarization Example 1 --- This is a fine-tuned version of the google/mt5-base model, used as a Russian text summarizer and trained on a dataset of ~50k samples. Updates are coming soon, aimed at improving quality, output length control and accuracy. Example Usage: ```python import torch from transformers import AutoTokenizer, AutoModelForSeq2SeqLM model_name = "sarahai/ru-sum" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) device = torch.device("cpu") # use "cuda" instead if a GPU is available model.to(device) input_text = "текст на русском" # your input text in Russian input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device) outputs = model.generate(input_ids, max_length=100, min_length=50, length_penalty=2.0, num_beams=4, early_stopping=True) # adjust generation settings to your preferences summary = tokenizer.decode(outputs[0], skip_special_tokens=True) print(summary) ``` References: Hugging Face Model Hub; T5 paper. Disclaimer: The model's performance may be influenced by the quality and representativeness of the data it was fine-tuned on. Users are encouraged to assess the model's suitability for their specific applications and datasets.
DrishtiSharma/phi2-english-to-hinglish-translation
DrishtiSharma
2024-02-08T08:24:31Z
4
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-02-07T20:23:18Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: microsoft/phi-2 model-index: - name: phi2-english-to-hinglish-translation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi2-english-to-hinglish-translation This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3394 - Rouge Scores: {'rouge1': 0.02194963696306387, 'rouge2': 0.017844397420545253, 'rougeL': 0.017985463648805815, 'rougeLsum': 0.02198801722885821} - Bleu Scores: [0.0141983812922229, 0.013783602019353523, 0.013237039007079092, 0.012647324457245113] - Gen Len: 2048.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge Scores | Bleu Scores | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------:|:-------:| | 1.6688 | 1.0 | 500 | 1.4150 | {'rouge1': 0.021944939879946292, 'rouge2': 0.017781155558600512, 'rougeL': 0.017866554441667286, 'rougeLsum': 0.02197862373873669} | [0.014214089766333284, 0.013807603949625002, 0.013250971870467268, 0.012646602626664907] | 2048.0 | | 1.2148 | 2.0 | 1000 | 1.3394 | {'rouge1': 0.02194963696306387, 'rouge2': 0.017844397420545253, 'rougeL': 0.017985463648805815, 'rougeLsum': 0.02198801722885821} | [0.0141983812922229, 0.013783602019353523, 0.013237039007079092, 0.012647324457245113] | 2048.0 | ### Framework versions - PEFT 0.8.2 - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu118 - Datasets 2.16.2.dev0 - Tokenizers 0.15.1
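The card above is auto-generated and does not include inference code. Since the repo is a PEFT adapter on microsoft/phi-2, a plausible loading sketch looks like the following; the prompt wording and generation settings are assumptions, not part of the card.

```python
# A minimal sketch of loading the PEFT adapter on top of its base model;
# prompt format and generation settings are assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "DrishtiSharma/phi2-english-to-hinglish-translation")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

inputs = tokenizer("Translate to Hinglish: How are you doing today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern should apply, with the base model swapped, to the sibling llama2-7b and mixtral-8x7b adapters listed below.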
madhiarasan/hr_qna
madhiarasan
2024-02-08T08:21:46Z
4
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:tiiuae/falcon-7b-instruct", "base_model:adapter:tiiuae/falcon-7b-instruct", "region:us" ]
null
2024-02-08T08:21:44Z
--- library_name: peft base_model: tiiuae/falcon-7b-instruct --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
DrishtiSharma/llama2-7b-english-to-hinglish-translation
DrishtiSharma
2024-02-08T08:21:46Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:NousResearch/Llama-2-7b-hf", "base_model:adapter:NousResearch/Llama-2-7b-hf", "region:us" ]
null
2024-02-05T14:11:22Z
--- library_name: peft tags: - generated_from_trainer base_model: NousResearch/Llama-2-7b-hf model-index: - name: llama2-7b-english-to-hinglish-translation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama2-7b-english-to-hinglish-translation This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7508 - Rouge Scores: {'rouge1': 0.9207934134490793, 'rouge2': 0.8268216875143521, 'rougeL': 0.863418556340243, 'rougeLsum': 0.9207165318568765} - Bleu Scores: [0.9430535279899742, 0.9289517504059885, 0.9111307023404618, 0.8922236591496603] - Gen Len: 2048.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge Scores | Bleu Scores | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:----------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------:|:-------:| | 0.8283 | 1.0 | 500 | 0.7644 | {'rouge1': 0.921717672607307, 'rouge2': 0.8269254584175559, 'rougeL': 0.8617480706939217, 'rougeLsum': 0.9216499826848323} | [0.9428124451183093, 0.9288838577090098, 0.910999858543974, 0.8919623155075178] | 2048.0 | | 0.5824 | 2.0 | 1000 | 0.7508 | {'rouge1': 0.9207934134490793, 'rouge2': 0.8268216875143521, 'rougeL': 0.863418556340243, 'rougeLsum': 0.9207165318568765} | [0.9430535279899742, 0.9289517504059885, 0.9111307023404618, 0.8922236591496603] | 2048.0 | ### Framework versions - PEFT 0.8.2 - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu118 - Datasets 2.16.2.dev0 - Tokenizers 0.15.1
DrishtiSharma/mixtral-8x7b-v0.1-english-to-hinglish-translation
DrishtiSharma
2024-02-08T08:20:00Z
2
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:mistralai/Mixtral-8x7B-v0.1", "base_model:adapter:mistralai/Mixtral-8x7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-02-07T09:27:11Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: mistralai/Mixtral-8x7B-v0.1 model-index: - name: mixtral-8x7b-v0.1-english-to-hinglish-translation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mixtral-8x7b-v0.1-english-to-hinglish-translation This model is a fine-tuned version of [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0769 - Rouge Scores: {'rouge1': 0.9045408202972536, 'rouge2': 0.795425441228359, 'rougeL': 0.8399846297860634, 'rougeLsum': 0.9043739034131012} - Bleu Scores: [0.0002881182166187815, 0.0002842750061873772, 0.0002764768847375588, 0.00026750640347869873] - Gen Len: 2048.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge Scores | Bleu Scores | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-----------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------:|:--------:| | 1.1771 | 1.0 | 500 | 1.0579 | {'rouge1': 0.9070255400902434, 'rouge2': 0.7976770190068221, 'rougeL': 0.8400261479965636, 'rougeLsum': 0.9069363147075731} | [0.00028395954091190866, 0.0002796973368739713, 0.0002722057765709132, 0.000263740024418467] | 2047.996 | | 0.7788 | 2.0 | 1000 | 1.0769 | {'rouge1': 0.9045408202972536, 'rouge2': 0.795425441228359, 'rougeL': 0.8399846297860634, 'rougeLsum': 0.9043739034131012} | [0.0002881182166187815, 0.0002842750061873772, 0.0002764768847375588, 0.00026750640347869873] | 2048.0 | ### Framework versions - PEFT 0.8.2 - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu118 - Datasets 2.16.2.dev0 - Tokenizers 0.15.1
humung/polyglot-ko-12.8b-vlending-v0.6
humung
2024-02-08T08:19:35Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-08T08:19:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
anish005/mistral-reddit
anish005
2024-02-08T08:09:39Z
60
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-02-08T06:40:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RajuEEE/LlaMa_FineTunedModel
RajuEEE
2024-02-08T08:00:51Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-08T08:00:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yangswei/visual-emotion-classification
yangswei
2024-02-08T07:44:31Z
857
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-02-08T06:51:26Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: image_classification results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.58125 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # image_classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1599 - Accuracy: 0.5813 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 13 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 1.8887 | 0.35 | | No log | 2.0 | 80 | 1.5494 | 0.425 | | No log | 3.0 | 120 | 1.4015 | 0.5188 | | No log | 4.0 | 160 | 1.2919 | 0.55 | | No log | 5.0 | 200 | 1.2205 | 0.5813 | | No log | 6.0 | 240 | 1.2246 | 0.575 | | No log | 7.0 | 280 | 1.2053 | 0.5312 | | No log | 8.0 | 320 | 1.1487 | 0.5687 | | No log | 9.0 | 360 | 1.1727 | 0.5437 | | No log | 10.0 | 400 | 1.1459 | 0.55 | | No log | 11.0 | 440 | 1.1313 | 0.5813 | | No log | 12.0 | 480 | 1.0990 | 0.6062 | | 1.1138 | 13.0 | 520 | 1.1020 | 0.6188 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
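As with the other auto-generated cards, the ViT card above omits usage code. A minimal sketch with the standard transformers image-classification pipeline follows; the image path is an illustrative placeholder.

```python
# A minimal sketch using the standard transformers image-classification
# pipeline; the image path is a hypothetical placeholder.
from transformers import pipeline

classifier = pipeline("image-classification", model="yangswei/visual-emotion-classification")
for pred in classifier("example_photo.jpg"):  # hypothetical local image
    print(pred["label"], round(pred["score"], 3))
```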
naivedhya/ajjubhai
naivedhya
2024-02-08T07:44:03Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2024-02-08T07:44:03Z
--- license: bigscience-openrail-m ---
wonderra/wonderra76
wonderra
2024-02-08T07:38:03Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2024-02-08T07:38:03Z
--- license: bigscience-bloom-rail-1.0 ---
JiajingChen/3
JiajingChen
2024-02-08T07:19:20Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-02-07T21:03:29Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: '3' results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 6.47 +/- 10.92 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
shuvom/churn-cl-v1
shuvom
2024-02-08T07:16:05Z
0
0
keras
[ "keras", "tf-keras", "binary-classification", "tensorflow", "region:us" ]
null
2024-02-08T07:16:03Z
--- library_name: keras tags: - binary-classification - keras - tensorflow --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | name | Adam | | weight_decay | None | | clipnorm | None | | global_clipnorm | None | | clipvalue | None | | use_ema | False | | ema_momentum | 0.99 | | ema_overwrite_frequency | None | | jit_compile | True | | is_legacy_optimizer | False | | learning_rate | 0.0010000000474974513 | | beta_1 | 0.9 | | beta_2 | 0.999 | | epsilon | 1e-07 | | amsgrad | False | | training_precision | float32 | ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
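The Keras card above lists optimizer hyperparameters but no loading code. One plausible way in, assuming the repo is a standard Keras push to the Hub, is sketched below; since the card does not document the input schema, model.summary() is used to inspect it.

```python
# A minimal loading sketch, assuming a standard Keras Hub push;
# the input schema is undocumented, so we only inspect the architecture here.
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("shuvom/churn-cl-v1")
model.summary()
```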
rygielcorpuz/temoc
rygielcorpuz
2024-02-08T07:12:38Z
6
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "region:us" ]
text-to-image
2023-12-16T06:40:11Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: temoc flexing output: url: images/image3.png - text: temoc suit output: url: images/image2.png - text: temoc kicking output: url: images/image1.png base_model: runwayml/stable-diffusion-v1-5 instance_prompt: null --- # temoc <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/rygielcorpuz/temoc/tree/main) them in the Files & versions tab.
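The LoRA card above offers the weights for download but no loading snippet. A plausible diffusers sketch follows, using a prompt from the card's widget examples; device and dtype choices are assumptions.

```python
# A minimal sketch of applying the LoRA to its listed base model with
# diffusers; device/dtype are assumptions, the prompt comes from the card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("rygielcorpuz/temoc")
image = pipe("temoc flexing").images[0]
image.save("temoc.png")
```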
souvenger/NLP2Linux
souvenger
2024-02-08T07:09:20Z
6
0
setfit
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2", "model-index", "region:us" ]
text-classification
2024-02-08T07:09:07Z
--- library_name: setfit tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer metrics: - accuracy widget: - text: Upgrade all installed packages with superuser privileges - text: Install package 'vim' as superuser - text: Remove package 'firefox' with superuser privileges - text: Change permissions of directory 'docs' to writable - text: Update package lists using superuser privileges pipeline_tag: text-classification inference: true base_model: sentence-transformers/paraphrase-mpnet-base-v2 model-index: - name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.0 name: Accuracy --- # SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 30 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:----------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------| | ls | <ul><li>'List all files and directories'</li><li>'Show files in the current directory'</li><li>'Display contents of the current directory'</li></ul> | | cd | <ul><li>'Change to the specified directory'</li><li>'Move to the home directory'</li><li>'Navigate to the specified directory path'</li></ul> | | mkdir docs | <ul><li>"Create a new directory named 'docs'"</li></ul> | | mkdir projects | <ul><li>"Make a directory named 'projects'"</li></ul> | | mkdir data | <ul><li>"Create a folder called 'data'"</li></ul> | | mkdir images | <ul><li>"Make a directory named 'images'"</li></ul> | | mkdir scripts | <ul><li>"Create a new folder named 'scripts'"</li></ul> | | rm example.txt | <ul><li>"Remove the file named 'example.txt'"</li></ul> | | rm temp.txt | <ul><li>"Delete the file called 'temp.txt'"</li></ul> | | rm file1 | <ul><li>"Remove the file named 
'file1'"</li></ul> | | rm file2 | <ul><li>"Delete the file named 'file2'"</li></ul> | | rm backup.txt | <ul><li>"Remove the file named 'backup.txt'"</li></ul> | | cp file1 /destination | <ul><li>'Copy file1 to directory /destination'</li></ul> | | cp file2 /backup | <ul><li>'Duplicate file2 to directory /backup'</li></ul> | | cp file3 /archive | <ul><li>'Copy file3 to folder /archive'</li></ul> | | cp file4 /temp | <ul><li>'Duplicate file4 to folder /temp'</li></ul> | | cp file5 /images | <ul><li>'Copy file5 to directory /images'</li></ul> | | mv file2 /new_location | <ul><li>'Move file2 to directory /new_location'</li></ul> | | mv file3 /backup | <ul><li>'Transfer file3 to directory /backup'</li></ul> | | mv file4 /archive | <ul><li>'Move file4 to folder /archive'</li></ul> | | mv file5 /temp | <ul><li>'Transfer file5 to folder /temp'</li></ul> | | mv file6 /images | <ul><li>'Move file6 to directory /images'</li></ul> | | cat README.md | <ul><li>"Display the contents of file 'README.md'"</li></ul> | | cat notes.txt | <ul><li>"Show the content of file 'notes.txt'"</li></ul> | | cat data.csv | <ul><li>"Print the contents of file 'data.csv'"</li></ul> | | cat script.sh | <ul><li>"Display the content of file 'script.sh'"</li></ul> | | cat config.ini | <ul><li>"Show the contents of file 'config.ini'"</li></ul> | | grep 'pattern' data.txt | <ul><li>"Search for 'pattern' in file 'data.txt'"</li></ul> | | grep 'word' text.txt | <ul><li>"Find occurrences of 'word' in file 'text.txt'"</li></ul> | | grep 'keyword' document.txt | <ul><li>"Search for 'keyword' in file 'document.txt'"</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.0 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("souvenger/NLP2Linux") # Run inference preds = model("Install package 'vim' as superuser") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 5 | 5.6667 | 9 | | Label | Training Sample Count | |:----------------------------|:----------------------| | cat README.md | 1 | | cat config.ini | 1 | | cat data.csv | 1 | | cat notes.txt | 1 | | cat script.sh | 1 | | cd | 10 | | cp file1 /destination | 1 | | cp file2 /backup | 1 | | cp file3 /archive | 1 | | cp file4 /temp | 1 | | cp file5 /images | 1 | | grep 'keyword' document.txt | 1 | | grep 'pattern' data.txt | 1 | | grep 'word' text.txt | 1 | | ls | 10 | | mkdir data | 1 | | mkdir docs | 1 | | mkdir images | 1 | | mkdir projects | 1 | | mkdir scripts | 1 | | mv file2 /new_location | 1 | | mv file3 /backup | 1 | | mv file4 /archive | 1 | | mv file5 /temp | 1 | | mv file6 /images | 1 | | rm backup.txt | 1 | | rm example.txt | 1 | | rm file1 | 1 | | rm file2 | 1 | | rm temp.txt | 1 | ### Training Hyperparameters - batch_size: (8, 8) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0042 | 1 | 0.1215 | - | | 0.2083 | 50 | 0.0232 | - | | 0.4167 | 100 | 0.01 | - | | 0.625 | 150 | 0.0044 | - | | 0.8333 | 200 | 0.0025 | - | ### Framework Versions - Python: 3.10.13 - SetFit: 1.0.3 - Sentence Transformers: 2.3.1 - Transformers: 4.37.0 - PyTorch: 2.1.2 - Datasets: 2.1.0 - Tokenizers: 0.15.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
GowthamMl/deepseeker-table-identification-v2
GowthamMl
2024-02-08T06:56:33Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-01-23T06:58:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nightdude/kanji-lora
nightdude
2024-02-08T06:56:12Z
0
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-02-08T06:35:22Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - nightdude/kanji-lora These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the nightdude/sakana-kanji dataset. Some example images are shown below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
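A minimal inference sketch for loading these adapter weights on top of the base model with diffusers; the prompt is a hypothetical example, since the card does not state the caption phrasing used during training:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model the LoRA was trained against
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA adapter weights from this repository
pipe.load_lora_weights("nightdude/kanji-lora")

# Hypothetical prompt; adjust to match the captions in nightdude/sakana-kanji
image = pipe("a kanji character meaning fish", num_inference_steps=30).images[0]
image.save("kanji.png")
```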
yoon1000/TrOCR_0208-2
yoon1000
2024-02-08T06:54:16Z
33
0
transformers
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "base_model:microsoft/trocr-base-stage1", "base_model:finetune:microsoft/trocr-base-stage1", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-02-08T06:51:30Z
--- base_model: microsoft/trocr-base-stage1 tags: - generated_from_trainer model-index: - name: TrOCR_0208-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TrOCR_0208-2 This model is a fine-tuned version of [microsoft/trocr-base-stage1](https://huggingface.co/microsoft/trocr-base-stage1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2584 - Cer: 0.1211 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.3873 | 1.71 | 500 | 1.6813 | 0.2361 | | 0.8298 | 3.42 | 1000 | 1.7390 | 0.2441 | | 0.5587 | 5.14 | 1500 | 1.5896 | 0.2090 | | 0.376 | 6.85 | 2000 | 1.4717 | 0.1775 | | 0.2847 | 8.56 | 2500 | 1.5528 | 0.1928 | | 0.2376 | 10.27 | 3000 | 1.4412 | 0.1727 | | 0.2101 | 11.99 | 3500 | 1.3770 | 0.1592 | | 0.2551 | 13.7 | 4000 | 1.4311 | 0.1564 | | 0.226 | 15.41 | 4500 | 1.2536 | 0.1337 | | 0.1365 | 17.12 | 5000 | 1.2753 | 0.1272 | | 0.14 | 18.84 | 5500 | 1.2584 | 0.1211 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1+cu121 - Datasets 2.13.0 - Tokenizers 0.15.0
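The card does not include an inference example; a minimal sketch using the standard TrOCR API follows. It assumes the processor files were pushed alongside the model weights (if not, fall back to the base microsoft/trocr-base-stage1 processor), and "line.png" is a placeholder path:

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("yoon1000/TrOCR_0208-2")
model = VisionEncoderDecoderModel.from_pretrained("yoon1000/TrOCR_0208-2")

# "line.png" is a placeholder path to a cropped text-line image
image = Image.open("line.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```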
Jaerim/bloom-7b1-lora-tagger_3
Jaerim
2024-02-08T06:52:51Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-08T06:49:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AiKonWaR/poet3
AiKonWaR
2024-02-08T06:43:54Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-02-08T06:43:54Z
--- license: creativeml-openrail-m ---
mesolitica/Qwen1.5-0.5B-4096-fpf
mesolitica
2024-02-08T06:43:05Z
6
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "ms", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-07T06:13:40Z
--- language: - ms --- # Full-Parameter Finetuning of Qwen1.5 0.5B on Malaysian Text README: https://github.com/huseinzol05/malaya/tree/5.1/session/qwen2 WandB: https://wandb.ai/huseinzol05/finetune-Qwen1.5-0.5B?workspace=user-huseinzol05
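The card does not include a usage example; a minimal generation sketch with the standard transformers causal-LM API is shown below. The Malay prompt is a hypothetical example:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("mesolitica/Qwen1.5-0.5B-4096-fpf")
model = AutoModelForCausalLM.from_pretrained("mesolitica/Qwen1.5-0.5B-4096-fpf")

# Hypothetical Malay prompt
inputs = tokenizer("Kuala Lumpur ialah", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```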
Basha738/llama2-supervised-ft-5epochs
Basha738
2024-02-08T06:34:13Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-02-08T06:30:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
arpanl/Fine-Tuned_Model2
arpanl
2024-02-08T06:29:41Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-02-08T04:56:44Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder model-index: - name: Fine-Tuned_Model2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Fine-Tuned_Model2 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
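No usage example is given; a minimal inference sketch with the image-classification pipeline follows. The label set depends on the (unspecified) imagefolder dataset, and "example.jpg" is a placeholder path:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="arpanl/Fine-Tuned_Model2")
# "example.jpg" is a placeholder; a local path or an image URL both work
print(classifier("example.jpg"))
```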
RajuEEE/GPT2_FineTunedModel
RajuEEE
2024-02-08T06:26:55Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-08T06:26:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kg-09/autotrain-test
kg-09
2024-02-08T06:26:27Z
1
1
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2024-02-08T06:26:19Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: <dr4wing> tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain Text encoder was not trained.
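AutoTrain DreamBooth runs for SDXL typically export LoRA adapter weights; assuming that is the case for this repository, a minimal inference sketch would be the following (the prompt simply reuses the instance token <dr4wing> from the card and is otherwise hypothetical):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Assumes this repository contains LoRA weights, as AutoTrain DreamBooth usually produces
pipe.load_lora_weights("kg-09/autotrain-test")

# Hypothetical prompt built around the instance token from the card
image = pipe(prompt="a drawing of a house, <dr4wing>").images[0]
image.save("output.png")
```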
turgutburak01/ppo-SnowballTarget
turgutburak01
2024-02-08T06:26:08Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2024-02-08T06:26:05Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* explaining how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: turgutburak01/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
ssoh/llama-2-7b-all-strings
ssoh
2024-02-08T06:25:09Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-08T06:20:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
umuthopeyildirim/fin-rwkv-169M
umuthopeyildirim
2024-02-08T06:22:40Z
16
0
transformers
[ "transformers", "pytorch", "safetensors", "rwkv", "text-generation", "finance", "en", "dataset:gbharti/finance-alpaca", "arxiv:2305.13048", "arxiv:2307.08621", "arxiv:2302.10866", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T07:17:22Z
--- license: apache-2.0 datasets: - gbharti/finance-alpaca language: - en library_name: transformers tags: - finance widget: - text: "Is this headline positive or negative? Headline: Australian Tycoon Forrest Shuts Nickel Mines After Prices Crash." example_title: "Sentiment analysis" - text: "Aluminum price per KG is 50$. Forecast max: +1$ min:+0.3$. What should be the current price of aluminum?" example_title: "Forecast" --- # Fin-RWKV: Attention-Free Financial Expert (WIP) Fin-RWKV is a cutting-edge, attention-free model designed specifically for financial analysis and prediction. Developed as part of a MindsDB Hackathon, this model leverages the simplicity and efficiency of the RWKV architecture to process financial data, providing insights and forecasts with remarkable accuracy. Fin-RWKV is tailored for professionals and enthusiasts in the finance sector who seek to integrate advanced deep learning techniques into their financial analyses. ## Use Cases - Sentiment analysis - Forecasting - Product pricing ## Features - Attention-Free Architecture: Utilizes the RWKV (Receptance Weighted Key Value) model, which bypasses the complexity of attention mechanisms while maintaining high performance. - Lower Costs: 10x to 100x+ lower inference cost, 2x to 10x lower training cost - Tiny: Lightweight enough to run in real time on CPUs, bypassing the GPU - it can run on your laptop today - Finance-Specific Training: Trained on the gbharti/finance-alpaca dataset, ensuring that the model is finely tuned for financial data analysis. - Transformers Library Integration: Built on the popular 'transformers' library, ensuring easy integration with existing ML pipelines and applications. ## Competing Against | Name | Param Count | Cost | Inference Cost | |---------------|-------------|------|----------------| | Fin-RWKV | 169M | $1.45 | Free on HuggingFace 🤗 & Low-End CPU | | [BloombergGPT](https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/) | 50 Billion | $1.3 million | Enterprise GPUs | | [FinGPT](https://huggingface.co/FinGPT) | 7 Billion | $302.4 | Consumer GPUs | | Architecture | Status | Compute Efficiency | Largest Model | Trained Tokens | Link | |--------------|--------|--------------------|---------------|---------------|------| | (Fin)RWKV | In Production | O ( N ) | 14B | 500B++ (the pile+) | [Paper](https://arxiv.org/abs/2305.13048) | | Ret Net (Microsoft) | Research | O ( N ) | 6.7B | 100B (mixed) | [Paper](https://arxiv.org/abs/2307.08621) | | State Space (Stanford) | Prototype | O ( Log N ) | 355M | 15B (the pile, subset) | [Paper](https://arxiv.org/abs/2302.10866) | | Liquid (MIT) | Research | - | <1M | - | [Paper](https://arxiv.org/abs/2302.10866) | | Transformer Architecture (included for contrasting reference) | In Production | O ( N^2 ) | 800B (est) | 13T++ (est) | - | <img src="https://cdn-uploads.huggingface.co/production/uploads/631ea4247beada30465fa606/7vAOYsXH1vhTyh22o6jYB.png" width="500" alt="Inference computational cost vs. Number of tokens"> _Note: Needs more data and training; for testing purposes only._
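This card omits a usage snippet; the sibling fin-rwkv-1b5 card later in this collection shows the intended pattern, which should transfer directly to this checkpoint. A minimal sketch, reusing the card's own widget prompt:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("umuthopeyildirim/fin-rwkv-169M")
model = AutoModelForCausalLM.from_pretrained("umuthopeyildirim/fin-rwkv-169M")

# Prompt taken from the card's sentiment-analysis widget example
prompt = ("Is this headline positive or negative? Headline: Australian Tycoon "
          "Forrest Shuts Nickel Mines After Prices Crash.")
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```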
Jzuluaga/bert-base-speaker-role-atc-en-uwb-atcc
Jzuluaga
2024-02-08T06:22:15Z
32
3
transformers
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "text", "sequence-classification", "en-atc", "en", "generated_from_trainer", "bertraffic", "audio-classification", "dataset:Jzuluaga/uwb_atcc", "arxiv:2211.04054", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
audio-classification
2022-12-05T10:18:56Z
--- language: en license: apache-2.0 tags: - text - sequence-classification - en-atc - en - generated_from_trainer - bert - bertraffic - audio-classification datasets: - Jzuluaga/uwb_atcc metrics: - Precision - Recall - Accuracy - F1 widget: - text: >- csa two nine six startup approved mike current qnh one zero one eight time check one seven - text: >- swiss four eight seven november runway three one cleared for takeoff wind one three zero degrees seven knots - text: >- lufthansa five yankee victor runway one three clear to land wind zero seven zero degrees - text: austrian seven one zulu hello to you reduce one six zero knots - text: >- sky travel one nine two approaching holding point three one ready for departure - name: bert-base-speaker-role-atc-en-uwb-atcc results: - task: type: token-classification name: chunking dataset: type: Jzuluaga/uwb_atcc name: UWB-ATCC corpus (Air Traffic Control Communications) config: test split: test metrics: - type: F1 value: 0.87 name: TEST F1 (macro) verified: false - type: Accuracy value: 0.91 name: TEST Accuracy verified: false - type: Precision value: 0.86 name: TEST Precision (macro) verified: false - type: Recall value: 0.88 name: TEST Recall (macro) verified: false - type: Jaccard Error Rate value: 0.169 name: TEST Jaccard Error Rate verified: false base_model: bert-base-uncased --- # bert-base-speaker-role-atc-en-uwb-atcc This model detects speaker roles from text. Normally, this task is done at the acoustic level; we propose to perform it at the text level instead, with a BERT model fine-tuned on a sequence classification task. For instance: - Utterance 1: **lufthansa six two nine charlie tango report when established** - Utterance 2: **report when established lufthansa six two nine charlie tango** Based on the text alone, can you tell the speaker role of each utterance - air traffic controller or pilot? Check the inference API (there are 5 examples)! This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [UWB-ATCC corpus](https://huggingface.co/datasets/Jzuluaga/uwb_atcc). <a href="https://github.com/idiap/atco2-corpus"> <img alt="GitHub" src="https://img.shields.io/badge/GitHub-Open%20source-green"> </a> It achieves the following results on the evaluation set: - Loss: 0.6191 - Accuracy: 0.9103 - Precision: 0.9239 - Recall: 0.9161 - F1: 0.9200 **Paper**: [ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications](https://arxiv.org/abs/2211.04054) Authors: Juan Zuluaga-Gomez, Karel Veselý, Igor Szöke, Petr Motlicek, Martin Kocour, Mickael Rigault, Khalid Choukri, Amrutha Prasad and others Abstract: Personal assistants, automatic speech recognizers and dialogue understanding systems are becoming more critical in our interconnected digital world. A clear example is air traffic control (ATC) communications. ATC aims at guiding aircraft and controlling the airspace in a safe and optimal manner. These voice-based dialogues are carried between an air traffic controller (ATCO) and pilots via very-high frequency radio channels. In order to incorporate these novel technologies into ATC (low-resource domain), large-scale annotated datasets are required to develop the data-driven AI systems. Two examples are automatic speech recognition (ASR) and natural language understanding (NLU). 
In this paper, we introduce the ATCO2 corpus, a dataset that aims at fostering research on the challenging ATC field, which has lagged behind due to lack of annotated data. The ATCO2 corpus covers 1) data collection and pre-processing, 2) pseudo-annotations of speech data, and 3) extraction of ATC-related named entities. The ATCO2 corpus is split into three subsets. 1) The ATCO2-test-set corpus contains 4 hours of ATC speech with manual transcripts and a subset with gold annotations for named-entity recognition (callsign, command, value). 2) The ATCO2-PL-set corpus consists of 5281 hours of unlabeled ATC data enriched with automatic transcripts from an in-domain speech recognizer, contextual information, speaker turn information, signal-to-noise ratio estimate and English language detection score per sample. Both are available for purchase through ELDA at this http URL. 3) The ATCO2-test-set-1h corpus is a one-hour subset of the original test set corpus, which we offer for free at https://www.atco2.org/data. We expect the ATCO2 corpus will foster research on robust ASR and NLU not only in the field of ATC communications but also in the general research community. Code - GitHub repository: https://github.com/idiap/atco2-corpus ## Intended uses & limitations This model was fine-tuned on air traffic control data. We do not expect it to keep the same performance on other datasets on which BERT was pre-trained or fine-tuned. ## Training and evaluation data See Table 7 (page 19) in our paper: [ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications](https://arxiv.org/abs/2211.04054). We describe there the data used to fine-tune our sequence classification model. - We use the UWB-ATCC corpus to fine-tune this model. You can download the raw data here: https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0 - We have also prepared scripts in our repository for preparing this database: - Dataset preparation folder: https://github.com/idiap/atco2-corpus/tree/main/data/databases/uwb_atcc/ - Prepare the data: https://github.com/idiap/atco2-corpus/blob/main/data/databases/uwb_atcc/data_prepare_uwb_atcc_corpus_other.sh ## Writing your own inference script A minimal inference snippet: ```python from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("Jzuluaga/bert-base-speaker-role-atc-en-uwb-atcc") model = AutoModelForSequenceClassification.from_pretrained("Jzuluaga/bert-base-speaker-role-atc-en-uwb-atcc") # Process a text sample (from UWB-ATCC) nlp = pipeline('text-classification', model=model, tokenizer=tokenizer) nlp("lining up runway three one csa five bravo") # [{'label': 'pilot', 'score': 0.9998971223831177}] ``` # Cite us If you use this code for your research, please cite our paper with: ``` @article{zuluaga2022bertraffic, title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications}, author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others}, journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar}, year={2022} } ``` and, ``` @article{zuluaga2022how, title={How Does Pre-trained Wav2Vec2.0 Perform on Domain Shifted ASR? 
An Extensive Benchmark on Air Traffic Control Communications}, author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others}, journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar}, year={2022} } ``` and, ``` @article{zuluaga2022atco2, title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications}, author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others}, journal={arXiv preprint arXiv:2211.04054}, year={2022} } ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 3000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | No log | 3.36 | 500 | 0.2346 | 0.9207 | 0.9197 | 0.9413 | 0.9303 | | 0.2212 | 6.71 | 1000 | 0.3161 | 0.9046 | 0.9260 | 0.9027 | 0.9142 | | 0.2212 | 10.07 | 1500 | 0.4337 | 0.9065 | 0.9191 | 0.9144 | 0.9167 | | 0.0651 | 13.42 | 2000 | 0.4743 | 0.9178 | 0.9249 | 0.9295 | 0.9272 | | 0.0651 | 16.78 | 2500 | 0.5538 | 0.9103 | 0.9196 | 0.9211 | 0.9204 | | 0.0296 | 20.13 | 3000 | 0.6191 | 0.9103 | 0.9239 | 0.9161 | 0.9200 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cu117 - Datasets 2.7.0 - Tokenizers 0.13.2
danaleee/Long_rank10_iter500_valprompt_token
danaleee
2024-02-08T06:13:04Z
4
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-02-08T03:48:36Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of omd rc_car tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - danaleee/Long_rank10_iter500_valprompt_token These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on "a photo of omd rc_car" using [DreamBooth](https://dreambooth.github.io/). Some example images are shown below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False.
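A minimal inference sketch for these weights; the prompt reuses the instance prompt stated in the card:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA adapter weights from this repository
pipe.load_lora_weights("danaleee/Long_rank10_iter500_valprompt_token")

# Instance prompt from the card
image = pipe("a photo of omd rc_car", num_inference_steps=30).images[0]
image.save("rc_car.png")
```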
umuthopeyildirim/fin-rwkv-1b5
umuthopeyildirim
2024-02-08T06:13:02Z
20
0
transformers
[ "transformers", "pytorch", "safetensors", "rwkv", "text-generation", "finance", "en", "dataset:gbharti/finance-alpaca", "arxiv:2305.13048", "arxiv:2307.08621", "arxiv:2302.10866", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T12:05:35Z
--- license: apache-2.0 datasets: - gbharti/finance-alpaca language: - en library_name: transformers tags: - finance widget: - text: >- user: Hypothetical, can taxes ever cause a net loss on otherwise-profitable stocks? bot: example_title: Hypothetical - text: >- user: What are some signs that the stock market might crash? bot: example_title: Question 2 - text: >- user: Where should I be investing my money? bot: example_title: Question - text: >- user: Is this headline positive or negative? Headline: Australian Tycoon Forrest Shuts Nickel Mines After Prices Crash. bot: example_title: Sentiment analysis - text: >- user: Aluminum price per KG is 50$. Forecast max: +1$ min:+0.3$. What should be the current price of aluminum? bot: example_title: Forecast --- # Fin-RWKV: Attention-Free Financial Expert (WIP) Fin-RWKV is a cutting-edge, attention-free model designed specifically for financial analysis and prediction. Developed as part of a MindsDB Hackathon, this model leverages the simplicity and efficiency of the RWKV architecture to process financial data, providing insights and forecasts with remarkable accuracy. Fin-RWKV is tailored for professionals and enthusiasts in the finance sector who seek to integrate advanced deep learning techniques into their financial analyses. ## Use Cases - Sentiment analysis - Forecasting - Product pricing ## Features - Attention-Free Architecture: Utilizes the RWKV (Receptance Weighted Key Value) model, which bypasses the complexity of attention mechanisms while maintaining high performance. - Lower Costs: 10x to 100x+ lower inference cost, 2x to 10x lower training cost - Tiny: Lightweight enough to run in real time on CPUs, bypassing the GPU - it can run on your laptop today - Finance-Specific Training: Trained on the gbharti/finance-alpaca dataset, ensuring that the model is finely tuned for financial data analysis. - Transformers Library Integration: Built on the popular 'transformers' library, ensuring easy integration with existing ML pipelines and applications. ## How to use ```py from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("umuthopeyildirim/fin-rwkv-1b5") model = AutoModelForCausalLM.from_pretrained("umuthopeyildirim/fin-rwkv-1b5") prompt = "user: Is this headline positive or negative? 
Headline: Australian Tycoon Forrest Shuts Nickel Mines After Prices Crash\nbot:" # Tokenize the input input_ids = tokenizer.encode(prompt, return_tensors="pt") # Generate a response output = model.generate(input_ids, max_length=333, num_return_sequences=1) # Decode the output generated_text = tokenizer.decode(output[0], skip_special_tokens=True) print(generated_text) ``` ## Competing Against | Name | Param Count | Cost | Inference Cost | |---------------|-------------|------|----------------| | Fin-RWKV | 1B5 | $3 | Free on HuggingFace 🤗 & Low-End CPU | | [BloombergGPT](https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/) | 50 Billion | $1.3 million | Enterprise GPUs | | [FinGPT](https://huggingface.co/FinGPT) | 7 Billion | $302.4 | Consumer GPUs | | Architecture | Status | Compute Efficiency | Largest Model | Trained Tokens | Link | |--------------|--------|--------------------|---------------|---------------|------| | (Fin)RWKV | In Production | O ( N ) | 14B | 500B++ (the pile+) | [Paper](https://arxiv.org/abs/2305.13048) | | Ret Net (Microsoft) | Research | O ( N ) | 6.7B | 100B (mixed) | [Paper](https://arxiv.org/abs/2307.08621) | | State Space (Stanford) | Prototype | O ( Log N ) | 355M | 15B (the pile, subset) | [Paper](https://arxiv.org/abs/2302.10866) | | Liquid (MIT) | Research | - | <1M | - | [Paper](https://arxiv.org/abs/2302.10866) | | Transformer Architecture (included for contrasting reference) | In Production | O ( N^2 ) | 800B (est) | 13T++ (est) | - | <img src="https://cdn-uploads.huggingface.co/production/uploads/631ea4247beada30465fa606/7vAOYsXH1vhTyh22o6jYB.png" width="500" alt="Inference computational cost vs. Number of tokens"> ## Stats for nerds ### Training Config - n_epoch: 100 - epoch_save_frequency: 10 - batch_size: 5 - ctx_len: 2000 - T_MAX: 384 - RWKV_FLOAT_MODE: fp16 - RWKV_DEEPSPEED: 0 ### Loss <img src="https://cdn-uploads.huggingface.co/production/uploads/631ea4247beada30465fa606/NvPKCBlbVhiVeeMpUAv2C.png" width="500" alt="Loss"> _Note: Needs more data and training; for testing purposes only. Not recommended for production-level deployment._ [Presentation](https://docs.google.com/presentation/d/1vNQ8Y5wwR0WXlO60fsXjkru5R9I0ZgykTmgag0B3Ato/edit?usp=sharing)
TesterGG/act_classifier
TesterGG
2024-02-08T05:57:42Z
46
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-08T05:28:51Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: TesterGG/act_classifier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # TesterGG/act_classifier This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3861 - Validation Loss: 0.5086 - Train Accuracy: 0.8073 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 9080, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.5062 | 0.5242 | 0.7969 | 0 | | 0.4096 | 0.5086 | 0.8073 | 1 | | 0.3861 | 0.5086 | 0.8073 | 2 | ### Framework versions - Transformers 4.38.0.dev0 - TensorFlow 2.15.0 - Datasets 2.16.1 - Tokenizers 0.15.1
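No usage example is provided; a minimal TensorFlow inference sketch follows. The example sentence is hypothetical, and the predicted label names depend on how the classifier head was configured during training:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("TesterGG/act_classifier")
model = TFAutoModelForSequenceClassification.from_pretrained("TesterGG/act_classifier")

# Hypothetical input sentence
inputs = tokenizer("Could you book the meeting room for 3pm?", return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label.get(pred, pred))
```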
joowon99/SOLAR-10.7B-ko_alpaca
joowon99
2024-02-08T05:50:48Z
2
0
peft
[ "peft", "safetensors", "llama", "llama-factory", "lora", "generated_from_trainer", "pytorch", "base_model:upstage/SOLAR-10.7B-Instruct-v1.0", "base_model:adapter:upstage/SOLAR-10.7B-Instruct-v1.0", "license:cc-by-4.0", "region:us" ]
null
2024-02-07T05:01:55Z
--- license: cc-by-4.0 library_name: peft tags: - llama-factory - lora - generated_from_trainer - pytorch base_model: upstage/SOLAR-10.7B-Instruct-v1.0 model-index: - name: solar_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # solar_model This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) on the ko_alpaca_style_dataset dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.7.1 - Transformers 4.37.1 - Pytorch 2.0.1+cu117 - Datasets 2.16.1 - Tokenizers 0.15.1
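The card gives no loading example; since these are PEFT LoRA weights, a minimal sketch attaches the adapter to the base model. The Alpaca-style prompt template is an assumption based on the ko_alpaca_style_dataset name, not a documented format:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "upstage/SOLAR-10.7B-Instruct-v1.0", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("upstage/SOLAR-10.7B-Instruct-v1.0")

# Attach the LoRA adapter from this repository
model = PeftModel.from_pretrained(base, "joowon99/SOLAR-10.7B-ko_alpaca")

# Alpaca-style prompt (assumption; adjust to the actual training template)
prompt = "### Instruction:\nWhat is the capital of Korea?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```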
humung/polyglot-ko-12.8b-vlending-v0.4
humung
2024-02-08T05:36:25Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-08T05:36:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hotsuyuki/gpt_0.125B_global_step4000_openassistant
hotsuyuki
2024-02-08T05:23:01Z
89
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-08T05:22:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yoon1000/TrOCR_0208-1
yoon1000
2024-02-08T05:13:51Z
32
0
transformers
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "base_model:microsoft/trocr-base-stage1", "base_model:finetune:microsoft/trocr-base-stage1", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-02-08T05:10:40Z
--- base_model: microsoft/trocr-base-stage1 tags: - generated_from_trainer model-index: - name: TrOCR_0208-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TrOCR_0208-1 This model is a fine-tuned version of [microsoft/trocr-base-stage1](https://huggingface.co/microsoft/trocr-base-stage1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8777 - Cer: 0.0931 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.5571 | 0.68 | 200 | 1.6487 | 0.2024 | | 0.9405 | 1.37 | 400 | 1.2816 | 0.1666 | | 0.6927 | 2.05 | 600 | 1.0319 | 0.1199 | | 1.0794 | 2.74 | 800 | 0.8777 | 0.0931 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1+cu121 - Datasets 2.13.0 - Tokenizers 0.15.0
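Since the card gives no usage snippet, here is a minimal inference sketch, assuming the standard TrOCR processor/model pairing was pushed alongside the fine-tuned weights (the image path is a placeholder):

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Assumes processor files are bundled with this repo, as is typical for TrOCR fine-tunes.
processor = TrOCRProcessor.from_pretrained("yoon1000/TrOCR_0208-1")
model = VisionEncoderDecoderModel.from_pretrained("yoon1000/TrOCR_0208-1")

image = Image.open("text_line.png").convert("RGB")  # placeholder input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```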
GobinathR/language-training
GobinathR
2024-02-08T05:08:28Z
0
0
keras
[ "keras", "code", "text2text-generation", "en", "ta", "dataset:HuggingFaceM4/WebSight", "region:us" ]
text2text-generation
2024-02-07T04:24:40Z
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 2272 num_examples: 8 download_size: 3903 dataset_size: 2272 configs: - config_name: default data_files: - split: train path: data/train-* datasets: - HuggingFaceM4/WebSight language: - en - ta metrics: - character library_name: keras pipeline_tag: text2text-generation tags: - code ---
wladimir/Reinforce-Pixelcopter-PLE-v0
wladimir
2024-02-08T05:04:17Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-02-08T05:04:13Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 18.00 +/- 15.89 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and train yours, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
SagarKeshave/wizard_math_
SagarKeshave
2024-02-08T04:50:47Z
1
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "en", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-08T04:50:47Z
--- inference: false language: - en pipeline_tag: text-generation --- ## WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct (RLEIF) <p style="font-size:28px;" align="center"> 🏠 <a href="https://wizardlm.github.io/" target="_blank">Home Page</a> </p> <p align="center"> <p align="center"> 🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> </p> <p align="center"> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br> </p> <p align="center"> 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a> </p> ## News [12/19/2023] 🔥 We released **WizardMath-7B-V1.1** trained from Mistral-7B, the **SOTA 7B math LLM**, achieves **83.2 pass@1** on GSM8k, and **33.0 pass@1** on MATH. Use this [[**Demo**](http://47.103.63.15:50083/)] to chat with it. [12/19/2023] 🔥 **WizardMath-7B-V1.1** outperforms **ChatGPT 3.5**, **Gemini Pro**, **Mixtral MOE**, and **Claude Instant** on GSM8K pass@1. [12/19/2023] 🔥 **WizardMath-7B-V1.1** is comparable with **ChatGPT 3.5**, **Gemini Pro**, and surpasses **Mixtral MOE** on MATH pass@1. | Model | Checkpoint | Paper | GSM8k | MATH | Demo| | ----- |------| ---- |------|-------|-------| | **WizardMath-7B-V1.1** | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.1" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **83.2** | **33.0** |[[**Demo**](http://47.103.63.15:50083/)] | | WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** || | WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** || | WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | | ## [12/19/2023] Comparing WizardMath-7B-V1.1 with other open source 7B size math LLMs. | Model | GSM8k Pass@1 | MATH Pass@1 | | ----- |------| ---- | | MPT-7B | 6.8 | 3.0 | |Llama 1-7B | 11.0 | 2.9 | |Llama 2-7B|12.3 |2.8 | |Yi-6b| 32.6 |5.8 | |Mistral-7B|37.8 |9.1 | |Qwen-7b|47.8 |9.3 | | RFT-7B | 50.3 | -- | | MAmmoTH-7B (COT) | 50.5 | 10.4 | | WizardMath-7B-V1.0 | 54.9 | 10.7 | |Abel-7B-001 |59.7 |13 | | MetaMath-7B | 66.5 | 19.8 | | Arithmo-Mistral-7B | 74.7 | 25.3 | |MetaMath-Mistral-7B|77.7 |28.2 | |Abel-7B-002 | 80.4 | 29.5 | | **WizardMath-7B-V1.1** | **83.2** | **33.0** | ## [12/19/2023] Comparing WizardMath-7B-V1.1 with large open source (30B~70B) LLMs. 
| Model | GSM8k Pass@1 | MATH Pass@1 | | ----- |------| ---- | | Llemma-34B | 51.5 | 25.0 | | Minerva-62B | 52.4 | 27.6 | | Llama 2-70B | 56.8 | 13.5 | | DeepSeek 67B | 63.4 | -- | | Grok 33B | 62.9 | 23.9 | | MAmmoTH-70B | 72.4 | 21.1 | | Yi-34B | 67.9 | 15.9 | | Mixtral 8x7B | 74.4 | 28.4 | | MetaMath-70B | 82.3 | 26.6 | | **WizardMath-7B-V1.1** | **83.2** | **33.0** | ## ❗ Data Contamination Check: Before model training, we carefully and rigorously checked all the training data, and used multiple deduplication methods to verify and prevent data leakage on the GSM8k and MATH test sets. 🔥 ❗<b>Note on model system prompt usage:</b> Please strictly use **the same system prompts** as ours, and note that we do not guarantee the accuracy of the **quantized versions**. **Default version:** ``` "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:" ``` **CoT Version:** (❗For **simple** math questions, we do NOT recommend using the CoT prompt.) ``` "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response: Let's think step by step." ``` ## Inference WizardMath Demo Script We provide the WizardMath inference demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo). ## Citation Please cite this repo if you use its data, methods, or code. ``` @article{luo2023wizardmath, title={WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct}, author={Luo, Haipeng and Sun, Qingfeng and Xu, Can and Zhao, Pu and Lou, Jianguang and Tao, Chongyang and Geng, Xiubo and Lin, Qingwei and Chen, Shifeng and Zhang, Dongmei}, journal={arXiv preprint arXiv:2308.09583}, year={2023} } ```
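For completeness, a minimal generation sketch that wires the default prompt above into standard transformers loading code (the loading code is not from the original card, and the example instruction is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SagarKeshave/wizard_math_"  # this repo; swap in the official WizardLM checkpoint if preferred
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

instruction = "What is 15% of 240?"
# The default system prompt quoted above, used verbatim.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```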
nightdude/kanji-lora-conv
nightdude
2024-02-08T04:40:09Z
1
1
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-02-08T03:37:14Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - nightdude/kanji-lora-conv These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the nightdude/sakana-kanji dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
zhan1993/phi2_flan_10_random_unbalanced-epoch_0
zhan1993
2024-02-08T04:28:58Z
0
0
null
[ "region:us" ]
null
2024-01-31T11:45:24Z
Number of experts present in the library: 10 | Expert Name | Base Model | Trained on | Adapter Type | | --- | --- | --- | --- | | cluster_3 | phi-2 | sordonia/flan-10k-flat/sciq_Direct_Question_Closed_Book_,dbpedia_14_given_list_what_category_does_the_paragraph_belong_to,race_high_Write_a_multi_choice_question_for_the_following_article,kilt_tasks_hotpotqa_complex_question,quoref_Answer_Test,adversarial_qa_dbidaf_based_on,duorc_ParaphraseRC_title_generation,cot_strategyqa,sciq_Direct_Question,adversarial_qa_dbert_answer_the_following_q,quartz_paragraph_question_plain_concat,wiki_hop_original_generate_subject,race_middle_Read_the_article_and_answer_the_question_no_option_,adversarial_qa_dbert_tell_what_it_is,cos_e_v1_11_aligned_with_common_sense,anli_r2_0_1_0,cot_gsm8k,qasc_qa_with_separated_facts_1,wiqa_effect_with_string_answer,wiki_bio_what_content,cot_qasc,gem_dart_1_1_0,natural_questions_open_1_0_0,race_middle_Write_a_multi_choice_question_for_the_following_article,wiki_qa_Topic_Prediction_Answer_Only,dream_generate_first_utterance,dream_read_the_following_conversation_and_answer_the_question,ropes_prompt_mix,wmt16_translate_ro_en_1_0_0,gem_wiki_lingua_english_en_1_1_0,social_i_qa_Generate_answer,cot_gsm8k_ii,stream_aqua_ii,quoref_Context_Contains_Answer,quail_context_question_description_text,ropes_prompt_beginning,drop_2_0_0 | lora | | cluster_6 | phi-2 | sordonia/flan-10k-flat/stream_aqua,anli_r3_0_1_0,quail_context_question_description_answer_id,wiki_hop_original_choose_best_object_interrogative_1,true_case,wmt16_translate_tr_en_1_0_0,qasc_is_correct_1,ropes_prompt_bottom_hint_beginning,quarel_heres_a_story,wiki_hop_original_explain_relation,adversarial_qa_droberta_answer_the_following_q,lambada_1_0_0,squad_v2_0_3_0_0,wiqa_effect_with_label_answer,cos_e_v1_11_i_think,quoref_Guess_Answer,cos_e_v1_11_question_description_option_text,wiki_qa_found_on_google,duorc_SelfRC_build_story_around_qa,quartz_read_passage_below_choose,qasc_qa_with_separated_facts_2,wiqa_what_might_be_the_last_step_of_the_process,multi_news_1_0_0,quoref_Read_And_Extract_,adversarial_qa_dbert_question_context_answer,app_reviews_categorize_rating_using_review,qasc_is_correct_2 | lora | | cluster_5 | phi-2 | sordonia/flan-10k-flat/social_i_qa_Generate_the_question_from_the_answer,duorc_SelfRC_answer_question,wiki_qa_Is_This_True_,cos_e_v1_11_explain_why_human,race_middle_Write_a_multi_choice_question_options_given_,dbpedia_14_pick_one_category_for_the_following_text,quartz_answer_question_based_on,fix_punct,squad_v1_1_3_0_0,sciq_Multiple_Choice_Question_First,quoref_Given_Context_Answer_Question,super_glue_copa_1_0_2,cnn_dailymail_3_4_0,race_middle_Is_this_the_right_answer,quail_context_description_question_answer_text,race_high_Read_the_article_and_answer_the_question_no_option_,duorc_ParaphraseRC_generate_question_by_answer,imdb_reviews_plain_text_1_0_0,quartz_use_info_from_question_paragraph,ropes_plain_bottom_hint,quarel_choose_between,glue_sst2_2_0_0,adversarial_qa_dbert_based_on,wmt16_translate_de_en_1_0_0 | lora | | cluster_4 | phi-2 | 
sordonia/flan-10k-flat/cos_e_v1_11_description_question_option_id,duorc_ParaphraseRC_movie_director,super_glue_cb_1_0_2,wiqa_what_is_the_final_step_of_the_following_process,cot_creak,glue_mnli_2_0_0,wiki_qa_Topic_Prediction_Question_Only,quarel_logic_test,ropes_plain_no_background,gem_e2e_nlg_1_1_0,wiqa_does_the_supposed_perturbation_have_an_effect,cot_ecqa,quarel_testing_students,wiki_bio_comprehension,wmt14_translate_fr_en_1_0_0,cos_e_v1_11_description_question_option_text,social_i_qa_Check_if_a_random_answer_is_valid_or_not,duorc_ParaphraseRC_decide_worth_it | lora | | cluster_8 | phi-2 | sordonia/flan-10k-flat/quartz_given_the_fact_answer_the_q,dream_generate_last_utterance,web_questions_question_answer,quoref_What_Is_The_Answer,stream_qed_ii,cos_e_v1_11_question_description_option_id,adversarial_qa_droberta_based_on,para_crawl_enes,glue_qqp_2_0_0,cos_e_v1_11_generate_explanation_given_text,glue_mrpc_2_0_0,duorc_ParaphraseRC_answer_question | lora | | cluster_2 | phi-2 | sordonia/flan-10k-flat/ropes_read_background_situation,yelp_polarity_reviews_0_2_0,kilt_tasks_hotpotqa_straighforward_qa,qasc_qa_with_combined_facts_1,snli_1_1_0,wiki_hop_original_choose_best_object_affirmative_2,cot_strategyqa_ii,gem_common_gen_1_1_0,race_middle_Select_the_best_answer_no_instructions_,quoref_Find_Answer,trec_1_0_0,duorc_SelfRC_question_answering,race_middle_Taking_a_test,ropes_plain_background_situation,super_glue_record_1_0_2,ropes_background_new_situation_answer,cos_e_v1_11_rationale,web_questions_get_the_answer,quail_no_prompt_id,quoref_Answer_Question_Given_Context,duorc_SelfRC_movie_director,app_reviews_convert_to_star_rating,duorc_SelfRC_decide_worth_it,stream_qed | lora | | cluster_1 | phi-2 | sordonia/flan-10k-flat/anli_r1_0_1_0,dream_answer_to_dialogue,wiki_bio_guess_person,web_questions_potential_correct_answer,ropes_new_situation_background_answer,duorc_SelfRC_title_generation,quartz_use_info_from_paragraph_question,quartz_having_read_above_passage,super_glue_wic_1_0_2,huggingface_xsum,cot_ecqa_ii,cos_e_v1_11_question_option_description_id,race_middle_Select_the_best_answer,kilt_tasks_hotpotqa_combining_facts,cot_creak_ii,race_high_Select_the_best_answer_no_instructions_,sciq_Multiple_Choice | lora | | cluster_7 | phi-2 | sordonia/flan-10k-flat/race_high_Select_the_best_answer,adversarial_qa_dbidaf_generate_question,math_dataset_algebra__linear_1d_1_0_0,race_high_Is_this_the_right_answer,adversarial_qa_droberta_generate_question,wiqa_which_of_the_following_is_the_supposed_perturbation,adversarial_qa_droberta_question_context_answer,ag_news_subset_1_0_0,adversarial_qa_droberta_tell_what_it_is,wiki_qa_automatic_system,quail_description_context_question_answer_text,web_questions_whats_the_answer,gem_web_nlg_en_1_1_0,ropes_given_background_situation,ropes_background_situation_middle,quartz_answer_question_below,duorc_ParaphraseRC_extract_answer,social_i_qa_I_was_wondering,wiki_hop_original_choose_best_object_affirmative_1,cos_e_v1_11_question_option_description_text,coqa_1_0_0,qasc_qa_with_separated_facts_4,dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to,app_reviews_generate_review,wiki_qa_Direct_Answer_to_Question,race_middle_Select_the_best_answer_generate_span_,race_high_Taking_a_test,glue_stsb_2_0_0,wiki_qa_Decide_good_answer,super_glue_wsc_fixed_1_0_2,social_i_qa_Show_choices_and_generate_answer,adversarial_qa_dbidaf_question_context_answer,cot_esnli,cot_esnli_ii,wiki_qa_Jeopardy_style,quoref_Answer_Friend_Question | lora | | cluster_9 | phi-2 | 
sordonia/flan-10k-flat/ropes_prompt_bottom_no_hint,qasc_qa_with_separated_facts_5,quail_context_description_question_answer_id,wmt16_translate_fi_en_1_0_0,wiki_hop_original_generate_object,quoref_Guess_Title_For_Context,qasc_qa_with_separated_facts_3,wiki_qa_Generate_Question_from_Topic,duorc_ParaphraseRC_question_answering,social_i_qa_Show_choices_and_generate_index,quac_1_0_0,duorc_SelfRC_generate_question,kilt_tasks_hotpotqa_formulate,definite_pronoun_resolution_1_1_0,adversarial_qa_dbidaf_answer_the_following_q,dbpedia_14_given_a_choice_of_categories_,gigaword_1_2_0,race_high_Write_a_multi_choice_question_options_given_,quoref_Found_Context_Online,quail_context_question_description_answer_text,wiki_bio_key_content,quail_context_description_question_text,quail_context_question_answer_description_id | lora | | cluster_10 | phi-2 | sordonia/flan-10k-flat/wiki_hop_original_generate_subject_and_object,aeslc_1_0_0,sciq_Multiple_Choice_Closed_Book_,wiki_qa_exercise,duorc_ParaphraseRC_build_story_around_qa,glue_wnli_2_0_0,wiki_hop_original_choose_best_object_interrogative_2,trivia_qa_rc_1_1_0,duorc_SelfRC_extract_answer,cot_sensemaking_ii,quail_description_context_question_text,quail_no_prompt_text,duorc_ParaphraseRC_generate_question,unified_qa_science_inst,app_reviews_convert_to_rating,glue_qnli_2_0_0,adversarial_qa_dbert_generate_question,wiki_bio_who,quail_description_context_question_answer_id,wiqa_what_might_be_the_first_step_of_the_process,dream_baseline,web_questions_short_general_knowledge_q,quarel_do_not_use,glue_cola_2_0_0,word_segment,race_high_Select_the_best_answer_generate_span_,duorc_SelfRC_generate_question_by_answer,cot_sensemaking,wiki_hop_original_choose_best_object_affirmative_3,super_glue_multirc_1_0_2,adversarial_qa_dbidaf_tell_what_it_is,paws_wiki_1_1_0,wiqa_what_is_the_missing_first_step,super_glue_rte_1_0_2,kilt_tasks_hotpotqa_final_exam,wiki_qa_Topic_Prediction_Question_and_Answer_Pair,cosmos_qa_1_0_0,quail_context_question_answer_description_text | lora | Last updated on: 2024-02-01 08:00:01+00:00
LoneStriker/Everyone-Coder-33b-v2-Base-5.0bpw-h6-exl2
LoneStriker
2024-02-08T04:28:02Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-08T04:19:26Z
--- license: other license_name: deepseek license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL tags: - merge --- Everyone-Coder-33b-v2-Base ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/ECrHQnZnv8UM9GUCQtlWW.jpeg) EveryoneLLM series of models made by the community, for the community. This is a coding-specific model made using fine-tunes of deepseek-coder-33b-base. Version 2 of the Everyone-Coder-33b model uses the task_arithmetic merging method, which yields major gains in coding performance over the ties method used in Version 1. You should find this version has much better coding performance than Version 1, without any of the negative effects merging can have on the integrity of the model. Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` The models used in this merge were as follows: - https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct - https://huggingface.co/codefuse-ai/CodeFuse-DeepSeek-33B - https://huggingface.co/WizardLM/WizardCoder-33B-V1.1 Thank you to the creators of the above AI models; they deserve full credit for the EveryoneLLM series of models. Without their hard work we wouldn't be able to achieve the great success we have in the open-source community. 💗 You can find the write-up for merging models here: https://docs.google.com/document/d/1_vOftBnrk9NRk5h10UqrfJ5CDih9KBKL61yvrZtVWPE/edit?usp=sharing The config for the merge can be found below: ```yaml models: - model: codefuse-ai_CodeFuse-DeepSeek-33B parameters: weight: 1 - model: deepseek-ai_deepseek-coder-33b-instruct parameters: weight: 1 - model: WizardLM_WizardCoder-33B-V1.1 parameters: weight: 1 merge_method: task_arithmetic base_model: deepseek-ai_deepseek-coder-33b-base parameters: normalize: true int8_mask: true dtype: float16 ```
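For readers unfamiliar with task_arithmetic: it adds the weighted parameter deltas of each fine-tune, relative to the shared base model, back onto the base. A schematic torch sketch of the idea (illustrative only, not the actual merge script; real merge tooling such as mergekit also handles sharding, tokenizer alignment, and dtype details):

```python
import torch

def task_arithmetic_merge(base_sd, finetuned_sds, weights, normalize=True):
    """Schematic merge over state dicts: base + sum_i w_i * (finetuned_i - base)."""
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]  # e.g. three weight-1 models become 1/3 each
    merged = {}
    for name, base_param in base_sd.items():
        delta = sum(w * (sd[name].float() - base_param.float())
                    for sd, w in zip(finetuned_sds, weights))
        merged[name] = (base_param.float() + delta).to(base_param.dtype)
    return merged
```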
LoneStriker/Everyone-Coder-33b-v2-Base-4.0bpw-h6-exl2
LoneStriker
2024-02-08T04:11:23Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-08T04:04:28Z
--- license: other license_name: deepseek license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL tags: - merge --- Everyone-Coder-33b-v2-Base ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/ECrHQnZnv8UM9GUCQtlWW.jpeg) EveryoneLLM series of models made by the community, for the community. This is a coding-specific model made using fine-tunes of deepseek-coder-33b-base. Version 2 of the Everyone-Coder-33b model uses the task_arithmetic merging method, which yields major gains in coding performance over the ties method used in Version 1. You should find this version has much better coding performance than Version 1, without any of the negative effects merging can have on the integrity of the model. Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` The models used in this merge were as follows: - https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct - https://huggingface.co/codefuse-ai/CodeFuse-DeepSeek-33B - https://huggingface.co/WizardLM/WizardCoder-33B-V1.1 Thank you to the creators of the above AI models; they deserve full credit for the EveryoneLLM series of models. Without their hard work we wouldn't be able to achieve the great success we have in the open-source community. 💗 You can find the write-up for merging models here: https://docs.google.com/document/d/1_vOftBnrk9NRk5h10UqrfJ5CDih9KBKL61yvrZtVWPE/edit?usp=sharing The config for the merge can be found below: ```yaml models: - model: codefuse-ai_CodeFuse-DeepSeek-33B parameters: weight: 1 - model: deepseek-ai_deepseek-coder-33b-instruct parameters: weight: 1 - model: WizardLM_WizardCoder-33B-V1.1 parameters: weight: 1 merge_method: task_arithmetic base_model: deepseek-ai_deepseek-coder-33b-base parameters: normalize: true int8_mask: true dtype: float16 ```
bdpc/SciBERT_twowayloss_25K_bs64
bdpc
2024-02-08T03:57:34Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:allenai/scibert_scivocab_uncased", "base_model:finetune:allenai/scibert_scivocab_uncased", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-06T14:37:28Z
--- base_model: allenai/scibert_scivocab_uncased tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: SciBERT_TwoWayLoss_25K_bs64 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SciBERT_TwoWayLoss_25K_bs64 This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 5.7117 - Accuracy: 0.7367 - Precision: 0.0357 - Recall: 0.9994 - F1: 0.0689 - Hamming: 0.2633 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 192 - eval_batch_size: 192 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 25000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Hamming | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:| | 6.7538 | 0.47 | 5000 | 6.4722 | 0.7208 | 0.0337 | 0.9987 | 0.0652 | 0.2792 | | 6.1625 | 0.95 | 10000 | 6.0293 | 0.7311 | 0.0350 | 0.9991 | 0.0676 | 0.2689 | | 5.7863 | 1.42 | 15000 | 5.8415 | 0.7362 | 0.0356 | 0.9992 | 0.0688 | 0.2638 | | 5.6995 | 1.9 | 20000 | 5.7343 | 0.7366 | 0.0357 | 0.9994 | 0.0689 | 0.2634 | | 5.4711 | 2.37 | 25000 | 5.7117 | 0.7367 | 0.0357 | 0.9994 | 0.0689 | 0.2633 | ### Framework versions - Transformers 4.35.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.7.1 - Tokenizers 0.14.1
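The precision/recall/Hamming metrics above indicate a multilabel setup; a minimal scoring sketch with the standard transformers pipeline (the label set is not documented in this card, so the output is inspected generically):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="bdpc/SciBERT_twowayloss_25K_bs64",
    top_k=None,                    # return scores for every label
    function_to_apply="sigmoid",   # multilabel scoring rather than softmax
)
scores = clf("We study transformer architectures for protein structure prediction.")
print(scores)  # per-label scores; apply a per-label threshold for multilabel decisions
```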
hivaze/ru-e5-base
hivaze
2024-02-08T03:52:59Z
218
3
transformers
[ "transformers", "safetensors", "xlm-roberta", "feature-extraction", "ru", "uk", "kk", "be", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-02-01T09:44:12Z
--- library_name: transformers language: - ru - uk - kk - be --- ## About model creation This is a smaller version of **intfloat/multilingual-e5-base** with only some Russian (Cyrillic in general) and, to a lesser extent, English tokens (and their embeddings) left. The model was created in a way similar to the one described in this post: https://medium.com/m/global-identity-2?redirectUrl=https%3A%2F%2Ftowardsdatascience.com%2Fhow-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90 The **CulturaX** dataset was used to search for the required tokens. As a result, out of the 250k tokens of the original model, only the **69,382** required tokens were kept. ## Was the model trained in any way? No. Only the tokenizer has been modified, and all changes to token identifiers have been accounted for by moving the embeddings in the model's word_embeddings module to their new positions, so **the quality of this model** on Cyrillic (and English) text **is exactly the same** as the original one. ## Why do we need this? This allows you to use significantly less memory during training and also greatly reduces the size of the model. ## Authors - Sergei Bratchikov (https://t.me/nlpwanderer)
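A schematic sketch of that embedding-moving step (not the author's actual script; `kept_ids` is a placeholder for the ~69k token ids selected from corpus statistics):

```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("intfloat/multilingual-e5-base")
kept_ids = torch.tensor([0, 1, 2])  # placeholder: old-vocab ids of kept tokens, in new-vocab order

old_emb = model.get_input_embeddings().weight.data
new_emb = old_emb[kept_ids].clone()           # new row i = old row kept_ids[i]
model.resize_token_embeddings(len(kept_ids))  # shrink the embedding matrix to the new vocab
model.get_input_embeddings().weight.data.copy_(new_emb)
```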
andysalerno/rainbowfish-v7-lora-adapter
andysalerno
2024-02-08T03:47:54Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:andysalerno/mistral-sft-v3", "base_model:adapter:andysalerno/mistral-sft-v3", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2024-02-07T05:23:23Z
--- license: apache-2.0 library_name: peft tags: - axolotl - generated_from_trainer base_model: andysalerno/mistral-sft-v3 model-index: - name: rainbowfish-v7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: andysalerno/mistral-sft-v3 model_type: AutoModelForCausalLM load_in_8bit: true load_in_4bit: false strict: false datasets: - path: andysalerno/rainbowfish-v1 type: system_prompt: "" field_system: system field_instruction: input field_output: output format: "{instruction}" no_input_format: "{instruction}" dataset_prepared_path: last_run_prepared val_set_size: 0.005 output_dir: ./lora-out-rainbow7 adapter: lora lora_model_dir: sequence_len: 2048 sample_packing: false # was true eval_sample_packing: false pad_to_sequence_len: false padding_side: left lora_r: 64 lora_alpha: 16 lora_dropout: 0.05 lora_target_linear: true lora_fan_in_fan_out: lora_target_modules: - gate_proj - down_proj - up_proj - q_proj - v_proj - k_proj - o_proj lora_modules_to_save: - embed_tokens - lm_head wandb_project: axolotl wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 4 micro_batch_size: 4 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 2e-5 train_on_inputs: false group_by_length: false bf16: true fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false # early_stopping_patience: 3 local_rank: logging_steps: 1 xformers_attention: flash_attention: true loss_watchdog_threshold: 5.0 loss_watchdog_patience: 3 hub_strategy: "every_save" hub_model_id: andysalerno/rainbowfish-v7 num_epochs: 2 warmup_steps: 100 # warmup_ratio: 0.1 eval_steps: 200 eval_table_size: eval_table_max_new_tokens: 128 # save_steps: 5 # max_steps: 400 saves_per_epoch: 2 debug: weight_decay: 0.1 fsdp: fsdp_config: special_tokens: bos_token: "<|im_start|>" eos_token: "<|im_end|>" unk_token: "<unk>" ``` </details><br> # rainbowfish-v7 This model is a fine-tuned version of [andysalerno/mistral-sft-v3](https://huggingface.co/andysalerno/mistral-sft-v3) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.6464 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6514 | 0.18 | 200 | 0.6828 | | 0.6875 | 0.37 | 400 | 0.6691 | | 0.6626 | 0.55 | 600 | 0.6625 | | 0.688 | 0.74 | 800 | 0.6558 | | 0.7143 | 0.92 | 1000 | 0.6520 | | 0.5243 | 1.11 | 1200 | 0.6495 | | 0.6205 | 1.29 | 1400 | 0.6482 | | 0.6159 | 1.47 | 1600 | 0.6469 | | 0.6287 | 1.66 | 1800 | 0.6465 | | 0.6606 | 1.84 | 2000 | 0.6464 | ### Framework versions - PEFT 0.8.2 - Transformers 4.38.0.dev0 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
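A minimal sketch for using these adapter weights with PEFT (the standard loading pattern; not taken from the original card):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model named in the config above, then attach this LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("andysalerno/mistral-sft-v3", device_map="auto")
model = PeftModel.from_pretrained(base, "andysalerno/rainbowfish-v7-lora-adapter")
tokenizer = AutoTokenizer.from_pretrained("andysalerno/mistral-sft-v3")
```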
ek826/LlamaGuard-7b-4.0bpw-exl2
ek826
2024-02-08T03:40:51Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:2307.09288", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-08T03:28:16Z
## Model Details The original LlamaGuard 7b model can be found [here](https://huggingface.co/meta-llama/LlamaGuard-7b) Llama-Guard is a 7B parameter [Llama 2](https://arxiv.org/abs/2307.09288)-based input-output safeguard model. It can be used for classifying content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM: it generates text indicating whether a given prompt or response is safe or unsafe and, if unsafe, lists the policy subcategories that are violated. Here is an example: ![](Llama-Guard_example.png) These are exl2 4.0bpw quantized weights. On binary classification over 2k toxic-chat test examples, the original 7B model scores Precision 0.9, Recall 0.277, F1 0.424; the 4.0bpw quantization scores Precision 0.92, Recall 0.246, F1 0.389.
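These exl2 weights target exllamav2-based loaders; for reference, the usual classification flow with the original fp16 checkpoint looks like this (a sketch of standard transformers chat-template usage, not specific to this quantization):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/LlamaGuard-7b"  # original weights; this repo holds the exl2 quantization
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

chat = [{"role": "user", "content": "How do I pick a lock?"}]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
output = model.generate(input_ids=input_ids, max_new_tokens=100)
# The completion reads e.g. "safe" or "unsafe" plus the violated category codes.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```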
Chattiori/AnyOrangeMix
Chattiori
2024-02-08T03:23:51Z
18
4
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-03-19T10:33:30Z
--- license: creativeml-openrail-m language: - en tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers --- # **AnyOrangeMix** _____ AnyOrangeMix is a merge model of Anything v4.5 and AbyssOrangeMix3. CivitAI: https://civitai.com/models/21503/anyorangemix-anything-v45-abyssorangemix-3 # Merge Source: Anything v4.5 (0.5) + AbyssOrangeMix 3A1B (0.5) Weighted Sum # Recommended Settings: * Sampler: “DPM++ SDE Karras” recommended. * Steps: 20~ * Clipskip: 1 or 2 * CFG Scale: 7 or higher recommended. * VAE: anything_v4.0.vae.pt # Recommended Prompt: Prompt: masterpiece, best quality, Negative: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, blurry, extra legs, extra feet, extra arms, extra fingers, missing legs, missing arms, ugly, huge breasts, monochrome # Recommended Embeds: * bad prompt * bad hands * bad artist * Easy Negative
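A minimal diffusers sketch wiring up the recommended settings (mapping "DPM++ SDE Karras" to `DPMSolverSDEScheduler` with Karras sigmas is the usual convention; that this repo ships full diffusers-format weights is an assumption):

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverSDEScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "Chattiori/AnyOrangeMix", torch_dtype=torch.float16
).to("cuda")
# "DPM++ SDE Karras" maps to the SDE solver with Karras sigmas (requires torchsde).
pipe.scheduler = DPMSolverSDEScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)

image = pipe(
    "masterpiece, best quality, 1girl, orange hair",  # illustrative prompt
    negative_prompt="lowres, bad anatomy, bad hands, worst quality, low quality",
    num_inference_steps=20,
    guidance_scale=7.0,
).images[0]
image.save("sample.png")
```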
Smoorf2022/TiniKatia
Smoorf2022
2024-02-08T03:09:57Z
0
0
null
[ "dataset:HuggingFaceM4/WebSight", "license:cc", "region:us" ]
null
2024-02-08T03:01:38Z
--- license: cc datasets: - HuggingFaceM4/WebSight metrics: - character ---
neozhang2003/ppo-LunarLander-v2
neozhang2003
2024-02-08T03:01:48Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-08T03:01:30Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 248.89 +/- 29.23 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption; check this repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption -- verify it against the files in this repo.
checkpoint = load_from_hub(repo_id="neozhang2003/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
CLMBR/existential-there-quantifier-transformer-0
CLMBR
2024-02-08T03:01:26Z
1
0
transformers
[ "transformers", "pytorch", "opt", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-02T09:49:57Z
--- tags: - generated_from_trainer model-index: - name: existential-there-quantifier-transformer-0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # existential-there-quantifier-transformer-0 This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.8606 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 3052726 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-------:|:---------------:| | 4.226 | 0.03 | 76320 | 4.1970 | | 4.0196 | 1.03 | 152640 | 4.0272 | | 3.9103 | 0.03 | 228960 | 3.9532 | | 3.842 | 1.03 | 305280 | 3.9117 | | 3.7902 | 0.03 | 381600 | 3.8860 | | 3.7496 | 1.03 | 457920 | 3.8709 | | 3.7142 | 0.03 | 534240 | 3.8605 | | 3.6843 | 1.03 | 610560 | 3.8533 | | 3.6562 | 0.03 | 686880 | 3.8494 | | 3.6294 | 1.03 | 763200 | 3.8464 | | 3.6054 | 0.03 | 839520 | 3.8448 | | 3.5872 | 1.03 | 915840 | 3.8442 | | 3.5719 | 0.03 | 992160 | 3.8433 | | 3.5494 | 1.03 | 1068480 | 3.8438 | | 3.5361 | 0.03 | 1144800 | 3.8453 | | 3.5229 | 1.03 | 1221120 | 3.8448 | | 3.5091 | 0.03 | 1297440 | 3.8469 | | 3.4962 | 0.03 | 1373760 | 3.8474 | | 3.4817 | 0.03 | 1450080 | 3.8502 | | 3.4739 | 1.03 | 1526400 | 3.8508 | | 3.4641 | 0.03 | 1602720 | 3.8521 | | 3.455 | 1.03 | 1679040 | 3.8532 | | 3.4471 | 0.03 | 1755360 | 3.8544 | | 3.4338 | 1.03 | 1831680 | 3.8554 | | 3.4207 | 0.03 | 1908000 | 3.8572 | | 3.4107 | 1.03 | 1984320 | 3.8577 | | 3.3968 | 0.03 | 2060640 | 3.8601 | | 3.3889 | 0.03 | 2136960 | 3.8605 | | 3.3808 | 1.03 | 2213280 | 3.8612 | | 3.364 | 0.03 | 2289600 | 3.8615 | | 3.3563 | 1.03 | 2365920 | 3.8631 | | 3.3506 | 0.03 | 2442240 | 3.8637 | | 3.3402 | 1.03 | 2518560 | 3.8635 | | 3.328 | 0.03 | 2594880 | 3.8644 | | 3.3179 | 0.03 | 2671200 | 3.8645 | | 3.3121 | 1.03 | 2747520 | 3.8638 | | 3.3051 | 0.03 | 2823840 | 3.8637 | | 3.3015 | 1.03 | 2900160 | 3.8633 | | 3.2959 | 0.03 | 2976480 | 3.8622 | | 3.2885 | 0.02 | 3052726 | 3.8606 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.3
ulichovick/RDL_ppo-LunarLander-v2
ulichovick
2024-02-08T02:51:49Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-08T02:51:28Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 252.33 +/- 40.48 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption; check this repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption -- verify it against the files in this repo.
checkpoint = load_from_hub(repo_id="ulichovick/RDL_ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
LULab/myNLP-Tagging-models
LULab
2024-02-08T02:45:50Z
0
0
null
[ "region:us" ]
null
2024-01-30T16:32:07Z
--- {} --- ### POS Tagging and NER Tagging models for the Myanmar language
zheng438/distilgpt2-disease-syptom
zheng438
2024-02-08T02:43:07Z
5
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-08T02:42:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hmone231/mistral-burmese-health
hmone231
2024-02-08T02:35:21Z
0
0
transformers
[ "transformers", "safetensors", "text-generation", "my", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
text-generation
2024-02-07T18:10:15Z
--- library_name: transformers language: - my pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SolaireOfTheSun/Llama-2-7b-chat-hf-sharded-bf16-feinabgestimmt-adapters-gpt
SolaireOfTheSun
2024-02-08T02:09:23Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "region:us" ]
null
2024-02-08T02:09:21Z
--- library_name: peft base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
yoonyoon/kb_v4.1_solar
yoonyoon
2024-02-08T02:04:30Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "solar", "mistral", "pytorch", "solar-ko", "ko", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-08T01:57:09Z
--- language: - ko - en pipeline_tag: text-generation inference: false tags: - solar - mistral - pytorch - solar-ko library_name: transformers license: apache-2.0 --- **Update Log** - 2024.01.08: Initial Test version Release of Solar-Ko # **Open-Solar-Ko** ⭐🇰🇷 Solar-Ko represents an advanced iteration of the upstage/SOLAR-10.7B-v1.0 model, featuring an expanded vocabulary and the inclusion of a Korean corpus for enhanced pretraining. Open-Solar-Ko exclusively utilizes publicly accessible Korean corpora, including sources such as [AI Hub](https://www.aihub.or.kr), [Modu Corpus, 모두의 말뭉치](https://corpus.korean.go.kr/), and [Korean Wikipedia](https://dumps.wikimedia.org/kowiki/). As training was conducted solely with publicly available corpora, this model is open for unrestricted use by everyone, adhering to the Apache 2.0 open-source license. ## Model Details **Model Developers:** Junbum Lee (Beomi) **Variations:** Solar-Ko is available in one parameter size: 10.7B, as a continually pretrained version. **Input:** The model accepts only text input. **Output:** The model produces text output exclusively. **Model Architecture:** SOLAR-KO-10.7B is an auto-regressive language model that leverages an optimized transformer architecture derived from Llama-2. | |Training Data|Parameters|Content Length|GQA|Tokens|Learning Rate| |---|---|---|---|---|---|---| |SOLAR-KO-10.7B|*A curated mix of Publicly Accessible Korean Corpora*|10.7B|2k|✘|>15B*|5e<sup>-5</sup>| **Training Corpus** The model was trained using selected datasets from AIHub and Modu Corpus. Detailed information about the training datasets is available below: - AI Hub: [corpus/AI_HUB](./corpus/AI_HUB) - Only the `Training` segment of the data was used. - The `Validation` and `Test` segments were deliberately excluded. - Modu Corpus: [corpus/MODU_CORPUS](./corpus/MODU_CORPUS) The final JSONL dataset used to train this model is approximately 61GB in size. Total token count: Approximately 15 billion tokens (*using the expanded tokenizer; with the original SOLAR tokenizer, >60 billion tokens). **Vocab Expansion** | Model Name | Vocabulary Size | Description | | --- | --- | --- | | Original Solar | 32000 | Sentencepiece BPE | | **Expanded SOLAR-KO-10.7B** | 46592 | Sentencepiece BPE.
Added Korean vocab and merges | **Tokenizing "안녕하세요, 오늘은 날씨가 좋네요."** - SOLAR-10.7B: 26 tokens - SOLAR-KO-10.7B: 8 tokens | Model | Tokens | | --- | --- | | SOLAR-10.7B | `['▁', '안', '<0xEB>', '<0x85>', '<0x95>', '하', '세', '요', ',', '▁', '오', '<0xEB>', '<0x8A>', '<0x98>', '은', '▁', '날', '<0xEC>', '<0x94>', '<0xA8>', '가', '▁', '좋', '네', '요', '.']` | | SOLAR-KO-10.7B | `['▁안녕', '하세요', ',', '▁오늘은', '▁날', '씨가', '▁좋네요', '.']` | **Tokenizing "Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!"** - SOLAR-10.7B: 22 tokens - SOLAR-KO-10.7B: 22 tokens | Model | Tokens | | --- | --- | | SOLAR-10.7B | `['▁Meet', '▁', '1', '0', '.', '7', 'B', '▁Solar', ':', '▁E', 'lev', 'ating', '▁Performance', '▁with', '▁Up', 'stage', '▁Dep', 'th', '▁UP', '▁Scal', 'ing', '!']` | | SOLAR-KO-10.7B | `['▁Meet', '▁', '1', '0', '.', '7', 'B', '▁Solar', ':', '▁E', 'lev', 'ating', '▁Performance', '▁with', '▁Up', 'stage', '▁Dep', 'th', '▁UP', '▁Scal', 'ing', '!']` | # LICENSE Apache 2.0 # **Model Benchmark** ## LM Eval Harness - Korean (polyglot branch) - Used EleutherAI's lm-evaluation-harness https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot | Task (metric) | 0-shot | 5-shot | 10-shot | 50-shot | |:---------------------------------|---------:|---------:|---------:|---------:| | kobest_boolq (macro_f1) | 0.853949 | 0.88098 | 0.898139 | 0.902354 | | kobest_copa (macro_f1) | 0.804531 | 0.826736 | 0.837656 | 0.860899 | | kobest_hellaswag (macro_f1) | 0.507174 | 0.500983 | 0.487287 | 0.512182 | | kobest_sentineg (macro_f1) | 0.3517 | 0.972291 | 0.977321 | 0.984884 | | kohatespeech (macro_f1) | 0.258111 | 0.403957 | 0.386808 | 0.462393 | | kohatespeech_apeach (macro_f1) | 0.337667 | 0.651697 | 0.705337 | 0.827757 | | kohatespeech_gen_bias (macro_f1) | 0.124535 | 0.503464 | 0.498501 | 0.443218 | | korunsmile (f1) | 0.3814 | 0.356939 | 0.369989 | 0.296193 | | nsmc (acc) | 0.5356 | 0.87162 | 0.88654 | 0.89632 | | pawsx_ko (acc) | 0.5435 | 0.5245 | 0.5315 | 0.5385 | ## Citation ``` @misc {solar_ko_junbum_2023, author = { {L. Junbum} }, title = { Solar-Ko-10.7b }, year = 2024, url = { https://huggingface.co/beomi/SOLAR-KO-10.7B }, publisher = { Hugging Face } } ``` ## Acknowledgements - Training support was provided by the [TPU Research Cloud](https://sites.research.google/trc/) program. - The training corpus includes data from [AI Hub](https://www.aihub.or.kr/), [Modu Corpus](https://corpus.korean.go.kr/), and [Korean Wikipedia](https://dumps.wikimedia.org/kowiki/).
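The card above stops short of a loading example. As an illustrative addition (not part of the original card), here is a minimal sketch using the standard `transformers` causal-LM API; the repo id `beomi/SOLAR-KO-10.7B` is taken from the citation, while the prompt and generation settings are assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from the citation above; prompt and generation settings are illustrative.
model_id = "beomi/SOLAR-KO-10.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

inputs = tokenizer("대한민국의 수도는", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```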
LDCC/LDCC-SOLAR-10.7B
LDCC
2024-02-08T01:59:04Z
3,603
14
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "arxiv:2312.15166", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-03T03:43:58Z
--- license: cc-by-nc-4.0 language: - ko --- # Model Card for LDCC-SOLAR-10.7B ## Developed by: Wonchul Kim ([Lotte Data Communication](https://www.ldcc.co.kr) AI Technical Team) ## Hardware and Software * **Hardware**: We utilized a single node with four A100 GPUs (A100 x 4) to train our model * **Training Factors**: We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace TRL Trainer](https://huggingface.co/docs/trl/trainer) / [HuggingFace Accelerate](https://huggingface.co/docs/accelerate/index) ## Method - This model was trained using the learning method introduced in the [SOLAR paper](https://arxiv.org/pdf/2312.15166.pdf). ## Base Model - [yanolja/KoSOLAR-10.7B-v0.1](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.1) (This model is no longer supported due to a tokenizer issue.) ## Caution - If you want to fine-tune this model, it is recommended to use the [tokenizer.json](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B/blob/v1.1/tokenizer.json) and [tokenizer_config.json](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B/blob/v1.1/tokenizer_config.json) files from revision v1.1.
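To make the caution above concrete, here is a small sketch (an addition, not from the original card) that pins the tokenizer to revision `v1.1`, where the recommended `tokenizer.json` and `tokenizer_config.json` live:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Pin the tokenizer to revision v1.1, as recommended in the caution above.
tokenizer = AutoTokenizer.from_pretrained("LDCC/LDCC-SOLAR-10.7B", revision="v1.1")
model = AutoModelForCausalLM.from_pretrained("LDCC/LDCC-SOLAR-10.7B")
```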
Anguuuuus/laryngitis
Anguuuuus
2024-02-08T01:55:54Z
145
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2024-02-08T01:31:07Z
--- license: apache-2.0 base_model: facebook/wav2vec2-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: laryngitis results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # laryngitis This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7828 - Accuracy: 0.5455 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4888 | 1.0 | 6 | 0.7395 | 0.4091 | | 0.4714 | 2.0 | 12 | 0.7492 | 0.4545 | | 0.4298 | 3.0 | 18 | 0.7774 | 0.5 | | 0.3732 | 4.0 | 24 | 0.7864 | 0.5 | | 0.352 | 5.0 | 30 | 0.7903 | 0.5 | | 0.3147 | 6.0 | 36 | 0.8435 | 0.5 | | 0.2969 | 7.0 | 42 | 0.7719 | 0.5 | | 0.2902 | 8.0 | 48 | 0.7035 | 0.5909 | | 0.238 | 9.0 | 54 | 0.7546 | 0.5909 | | 0.2654 | 10.0 | 60 | 0.7828 | 0.5455 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
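As an illustrative addition to this auto-generated card, inference would typically go through the `transformers` audio-classification pipeline; the file name below is a placeholder, and the label names depend on the (unspecified) training data:

```python
from transformers import pipeline

# Minimal sketch, assuming a local audio file on disk.
classifier = pipeline("audio-classification", model="Anguuuuus/laryngitis")
print(classifier("voice_sample.wav"))
```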
tomashs/multiple_choice_cowese_betoLDA_2
tomashs
2024-02-08T01:51:52Z
19
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "generated_from_trainer", "base_model:dccuchile/bert-base-spanish-wwm-cased", "base_model:finetune:dccuchile/bert-base-spanish-wwm-cased", "endpoints_compatible", "region:us" ]
null
2024-02-08T01:51:28Z
--- base_model: dccuchile/bert-base-spanish-wwm-cased tags: - generated_from_trainer model-index: - name: multiple_choice_cowese_betoLDA_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multiple_choice_cowese_betoLDA_2 This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
LoneStriker/Everyone-Coder-33b-v2-Base-GGUF
LoneStriker
2024-02-08T01:47:43Z
17
3
null
[ "gguf", "merge", "license:other", "endpoints_compatible", "region:us" ]
null
2024-02-08T00:29:12Z
--- license: other license_name: deepseek license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL tags: - merge --- Everyone-Coder-33b-v2-Base ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/ECrHQnZnv8UM9GUCQtlWW.jpeg) EveryoneLLM series of models made by the community, for the community. This is a coding-specific model made using fine-tunes of deepseek-coder-33b-base. Version 2 of the Everyone-Coder-33b model uses the task_arithmetic merge method, which yields major gains in coding performance compared with the ties method. You should find this version has much better coding performance than Version 1, without any of the negative effects merging can have on the integrity of the model. Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` The models used in this merge were as follows: - https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct - https://huggingface.co/codefuse-ai/CodeFuse-DeepSeek-33B - https://huggingface.co/WizardLM/WizardCoder-33B-V1.1 Thank you to the creators of the above AI models; they deserve full credit for the EveryoneLLM series of models. Without their hard work we wouldn't be able to achieve the success we have in the open-source community. 💗 You can find the write-up on merging models here: https://docs.google.com/document/d/1_vOftBnrk9NRk5h10UqrfJ5CDih9KBKL61yvrZtVWPE/edit?usp=sharing The config for the merge can be found below: ```yaml models: - model: codefuse-ai_CodeFuse-DeepSeek-33B parameters: weight: 1 - model: deepseek-ai_deepseek-coder-33b-instruct parameters: weight: 1 - model: WizardLM_WizardCoder-33B-V1.1 parameters: weight: 1 merge_method: task_arithmetic base_model: deepseek-ai_deepseek-coder-33b-base parameters: normalize: true int8_mask: true dtype: float16 ```
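Because this repo ships GGUF quantizations, one plausible way to apply the Alpaca template above is via `llama-cpp-python`. This is a sketch only: the GGUF filename is an assumption, so substitute an actual quant from the repo's Files & versions tab:

```python
from llama_cpp import Llama

# The filename below is an assumption -- pick a real quant from the repo files.
llm = Llama(model_path="Everyone-Coder-33b-v2-Base-Q4_K_M.gguf", n_ctx=4096)

# Fill the Alpaca template from the card with a concrete instruction.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that reverses a string.\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```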
ikisx6/ericlorran
ikisx6
2024-02-08T01:39:40Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-02-02T01:25:29Z
--- license: creativeml-openrail-m ---
jeiku/Pasta-PrimaMaid-7b_GGUF
jeiku
2024-02-08T01:35:26Z
2
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "base_model:Nitral-Archive/Kunocchini-7b", "base_model:quantized:Nitral-Archive/Kunocchini-7b", "endpoints_compatible", "region:us" ]
null
2024-02-08T00:48:12Z
--- base_model: - Test157t/Kunocchini-7b - Test157t/Pasta-Made_7b library_name: transformers tags: - mergekit - merge --- This is a merge created by https://huggingface.co/Test157t; I have merely quantized the model into GGUF. Please visit https://huggingface.co/Test157t/Kunocchini-7b for the original weights. The original description is as follows: # mergedmodel This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. Quants from @s3nh! https://huggingface.co/s3nh/Pasta-PrimaMaid-7b-GGUF ### Models Merged The following models were included in the merge: * [Test157t/Kunocchini-7b](https://huggingface.co/Test157t/Kunocchini-7b) * [Test157t/Pasta-Made_7b](https://huggingface.co/Test157t/Pasta-Made_7b) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: Test157t/Kunocchini-7b layer_range: [0, 32] - model: Test157t/Pasta-Made_7b layer_range: [0, 32] merge_method: slerp base_model: Test157t/Kunocchini-7b parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
kevinautomation/tiny_llama_instruct_generation
kevinautomation
2024-02-08T01:26:33Z
4
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "base_model:adapter:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "license:apache-2.0", "region:us" ]
null
2024-02-08T01:26:31Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer datasets: - generator base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T model-index: - name: tiny_llama_instruct_generation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny_llama_instruct_generation This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 2.0919 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.3923 | 0.04 | 20 | 2.3466 | | 2.2664 | 0.08 | 40 | 2.2596 | | 2.1909 | 0.12 | 60 | 2.1966 | | 2.1885 | 0.16 | 80 | 2.1737 | | 2.1536 | 0.2 | 100 | 2.1553 | | 2.1255 | 0.24 | 120 | 2.1426 | | 2.1298 | 0.29 | 140 | 2.1318 | | 2.0497 | 0.33 | 160 | 2.1242 | | 2.0967 | 0.37 | 180 | 2.1198 | | 2.1252 | 0.41 | 200 | 2.1160 | | 2.1051 | 0.45 | 220 | 2.1139 | | 2.0848 | 0.49 | 240 | 2.1121 | | 2.1562 | 0.53 | 260 | 2.1104 | | 2.1043 | 0.57 | 280 | 2.1088 | | 2.0865 | 0.61 | 300 | 2.1075 | | 2.0729 | 0.65 | 320 | 2.1065 | | 2.1046 | 0.69 | 340 | 2.1059 | | 2.1398 | 0.73 | 360 | 2.1050 | | 2.0928 | 0.78 | 380 | 2.1035 | | 2.1055 | 0.82 | 400 | 2.1027 | | 2.0327 | 0.86 | 420 | 2.1017 | | 2.0904 | 0.9 | 440 | 2.1012 | | 2.0922 | 0.94 | 460 | 2.1006 | | 2.0911 | 0.98 | 480 | 2.0997 | | 2.1063 | 1.02 | 500 | 2.0994 | | 2.1296 | 1.06 | 520 | 2.0993 | | 2.1051 | 1.1 | 540 | 2.0986 | | 2.0919 | 1.14 | 560 | 2.0982 | | 2.0608 | 1.18 | 580 | 2.0977 | | 2.0865 | 1.22 | 600 | 2.0966 | | 2.0912 | 1.27 | 620 | 2.0962 | | 2.0858 | 1.31 | 640 | 2.0962 | | 2.0914 | 1.35 | 660 | 2.0961 | | 2.0542 | 1.39 | 680 | 2.0951 | | 2.0939 | 1.43 | 700 | 2.0948 | | 2.0707 | 1.47 | 720 | 2.0942 | | 2.1158 | 1.51 | 740 | 2.0944 | | 2.079 | 1.55 | 760 | 2.0941 | | 2.0232 | 1.59 | 780 | 2.0935 | | 2.0954 | 1.63 | 800 | 2.0934 | | 2.079 | 1.67 | 820 | 2.0939 | | 2.0747 | 1.71 | 840 | 2.0932 | | 2.0881 | 1.76 | 860 | 2.0926 | | 2.0319 | 1.8 | 880 | 2.0928 | | 2.1047 | 1.84 | 900 | 2.0922 | | 2.0383 | 1.88 | 920 | 2.0923 | | 2.0602 | 1.92 | 940 | 2.0923 | | 2.0902 | 1.96 | 960 | 2.0919 | | 2.0845 | 2.0 | 980 | 2.0919 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
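The card does not show how to load the adapter. Assuming this repo holds a LoRA adapter for the listed base model, a minimal PEFT sketch looks like this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")

# Attach the adapter weights from this repo on top of the base model.
model = PeftModel.from_pretrained(base, "kevinautomation/tiny_llama_instruct_generation")
```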
valurank/seo-headline
valurank
2024-02-08T01:26:06Z
15
0
transformers
[ "transformers", "tensorboard", "safetensors", "pegasus", "text2text-generation", "generated_from_trainer", "base_model:google/pegasus-cnn_dailymail", "base_model:finetune:google/pegasus-cnn_dailymail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-06T16:58:52Z
--- base_model: google/pegasus-cnn_dailymail tags: - generated_from_trainer model-index: - name: seo-headline_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # seo-headline_2 This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5682 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.8031 | 1.29 | 500 | 0.7142 | | 0.6117 | 2.58 | 1000 | 0.5948 | | 0.5568 | 3.86 | 1500 | 0.5755 | | 0.5219 | 5.15 | 2000 | 0.5682 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
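As an illustrative addition (not from the original card), headline generation with a Pegasus checkpoint is typically exposed through the summarization pipeline; the input text and length limits below are assumptions:

```python
from transformers import pipeline

# Sketch: treat headline generation as short-form summarization.
generator = pipeline("summarization", model="valurank/seo-headline")
article = "Paste the article you want a headline for here."
print(generator(article, max_length=32, min_length=8)[0]["summary_text"])
```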
IB13/t5_ppo_model_withoutkl
IB13
2024-02-08T01:17:02Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:IB13/sft_t5_base_processed_model", "base_model:adapter:IB13/sft_t5_base_processed_model", "region:us" ]
null
2024-02-08T01:16:57Z
--- library_name: peft base_model: IB13/sft_t5_base_processed_model --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2
sungnyun/diffblender
sungnyun
2024-02-08T01:14:51Z
0
1
transformers
[ "transformers", "text-to-image", "en", "arxiv:2305.15194", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-to-image
2023-12-21T13:01:03Z
--- license: apache-2.0 language: - en library_name: transformers pipeline_tag: text-to-image --- <br> # DiffBlender Model Card This repo contains the models from our paper [**DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models**](https://arxiv.org/abs/2305.15194). ## Model details **Model type:** DiffBlender successfully synthesizes complex combinations of input modalities. It enables flexible manipulation of conditions, providing customized generation aligned with user preferences. We designed its structure to intuitively extend to additional modalities while achieving a low training cost through a partial update of hypernetworks. We provide its model checkpoint, trained with six modalities: sketch, depth map, grounding box, keypoints, color palette, and style embedding. >> `./checkpoint_latest.pth` **License:** Apache 2.0 License **Where to send questions or comments about the model:** https://github.com/sungnyun/diffblender/issues ## Training dataset [Microsoft COCO 2017 dataset](https://cocodataset.org/#home) <br> More details are available on our project page: https://sungnyun.github.io/diffblender/.
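DiffBlender runs through the custom code in the linked GitHub repo rather than a standard `diffusers` class, so the one step that can be sketched generically is fetching the checkpoint named above via `huggingface_hub`:

```python
from huggingface_hub import hf_hub_download

# Download the checkpoint named in the card; inference itself goes through the
# code at https://github.com/sungnyun/diffblender (see the project page).
ckpt_path = hf_hub_download(repo_id="sungnyun/diffblender", filename="checkpoint_latest.pth")
print(ckpt_path)
```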
mathreader/q-FrozenLake-v1-4x4-noSlippery
mathreader
2024-02-08T01:10:16Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-02-08T01:10:13Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gym # `load_from_hub` is the helper defined in the Deep RL Course notebook (assumed available) model = load_from_hub(repo_id="mathreader/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
annazdr/new-nace
annazdr
2024-02-08T00:43:06Z
46
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-02-08T00:42:12Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # annazdr/new-nace This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('annazdr/new-nace') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('annazdr/new-nace') model = AutoModel.from_pretrained('annazdr/new-nace') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=annazdr/new-nace) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1001 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.BatchAllTripletLoss.BatchAllTripletLoss` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
rasyosef/bert-amharic-tokenizer
rasyosef
2024-02-08T00:31:39Z
0
2
transformers
[ "transformers", "am", "dataset:oscar", "dataset:mc4", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-02-08T00:10:43Z
--- license: mit datasets: - oscar - mc4 language: - am library_name: transformers --- # Amharic WordPiece Tokenizer This repo contains a **WordPiece** tokenizer trained on the **Amharic** subset of the [oscar](https://huggingface.co/datasets/oscar) and [mc4](https://huggingface.co/datasets/mc4) datasets. It's the same as the **BERT** tokenizer but trained from scratch on an Amharic dataset with a vocabulary size of `30522`. # How to use You can load the tokenizer from the Hugging Face Hub as follows. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("rasyosef/bert-amharic-tokenizer") tokenizer.tokenize("የዓለምአቀፉ ነጻ ንግድ መስፋፋት ድህነትን ለማሸነፍ በሚደረገው ትግል አንዱ ጠቃሚ መሣሪያ ሊሆን መቻሉ ብዙ የሚነገርለት ጉዳይ ነው።") ``` Output: ```python ['የዓለም', '##አቀፉ', 'ነጻ', 'ንግድ', 'መስፋፋት', 'ድህነትን', 'ለማሸነፍ', 'በሚደረገው', 'ትግል', 'አንዱ', 'ጠቃሚ', 'መሣሪያ', 'ሊሆን', 'መቻሉ', 'ብዙ', 'የሚነገርለት', 'ጉዳይ', 'ነው', '።'] ```
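For completeness, a short follow-up sketch (an addition to the card) showing the round-trip through token ids, reusing part of the sentence above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rasyosef/bert-amharic-tokenizer")

ids = tokenizer.encode("ነጻ ንግድ መስፋፋት")  # phrase taken from the example sentence above
print(ids)
print(tokenizer.decode(ids))  # decodes back to text, including the [CLS]/[SEP] markers
```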
smotoc/foxy_mistral7B_unsloth_4k
smotoc
2024-02-08T00:26:25Z
14
0
transformers
[ "transformers", "pytorch", "gguf", "mistral", "text-generation", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "base_model:quantized:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-02-08T00:09:47Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - gguf base_model: unsloth/mistral-7b-bnb-4bit --- # Uploaded model - **Developed by:** smotoc - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
celik-muhammed/multi-qa-mpnet-base-dot-v1-finetuned-dtc-zoomcamp
celik-muhammed
2024-02-08T00:21:13Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "tflite", "safetensors", "mpnet", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-02-08T00:11:24Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # celik-muhammed/multi-qa-mpnet-base-dot-v1-finetuned-dtc-zoomcamp This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('celik-muhammed/multi-qa-mpnet-base-dot-v1-finetuned-dtc-zoomcamp') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=celik-muhammed/multi-qa-mpnet-base-dot-v1-finetuned-dtc-zoomcamp) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 794 with parameters: ``` {'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 989 with parameters: ``` {'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 1.2800000000000005e-10 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 80.0, "weight_decay": 0.1 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': True, 'pooling_mode_mean_sqrt_len_tokens': True, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) (2): Dense({'in_features': 3072, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) (3): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
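Beyond the generated card: given the `multi-qa` base, a typical next step is scoring question-passage similarity once you have embeddings. The query and passages below are placeholders:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('celik-muhammed/multi-qa-mpnet-base-dot-v1-finetuned-dtc-zoomcamp')

# Placeholder query/passages -- substitute your own course FAQ data.
query_emb = model.encode("Can I still join the course?", convert_to_tensor=True)
doc_embs = model.encode(
    ["Yes, you can register until the deadline.", "Week 6 covers stream processing."],
    convert_to_tensor=True,
)
print(util.cos_sim(query_emb, doc_embs))  # one score per candidate passage
```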
askasok/PrayerPortal
askasok
2024-02-08T00:18:27Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2024-02-08T00:17:15Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
noza-kit/Adapter_llama2_translate_Q_enpt_ex2-3epoch
noza-kit
2024-02-08T00:14:31Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-02-07T16:35:34Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
arieridwans/phi_2-finetuned-lyrics
arieridwans
2024-02-07T23:59:58Z
3
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-07T23:55:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
adarshheg/llama2-13b-finetuned-100-v1
adarshheg
2024-02-07T23:54:20Z
0
0
null
[ "safetensors", "autotrain", "text-generation", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2024-02-07T23:54:15Z
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
rorito/concept-perfect-eyes
rorito
2024-02-07T23:41:53Z
455
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2024-02-07T23:41:41Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '-' output: url: images/ComfyUI_00636_.jpeg base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: null --- # concept-perfect-eyes <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/rorito/concept-perfect-eyes/tree/main) them in the Files & versions tab.
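The card only offers a manual download, so here is a sketch of attaching the LoRA with `diffusers`; the prompt is an assumption, since no trigger word (`instance_prompt`) is specified:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model listed above, then attach this LoRA.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("rorito/concept-perfect-eyes")

# The prompt is an assumption; the card specifies no trigger word.
image = pipe("close-up portrait, detailed perfect eyes").images[0]
image.save("perfect_eyes.png")
```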
zheng438/experiments
zheng438
2024-02-07T23:31:22Z
2
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "base_model:adapter:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "license:apache-2.0", "region:us" ]
null
2024-02-07T23:30:01Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T model-index: - name: experiments results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # experiments This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0912 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 123 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.1412 | 0.2 | 311 | 0.1461 | | 0.1095 | 0.4 | 622 | 0.1154 | | 0.089 | 0.6 | 933 | 0.1029 | | 0.0875 | 0.8 | 1244 | 0.0912 | ### Framework versions - PEFT 0.8.1 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.1
zheng438/TinyLlama-1.1B-fine-tuned-predict
zheng438
2024-02-07T23:30:00Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-07T23:28:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
joislosinghermind/lola-gunvolt
joislosinghermind
2024-02-07T23:20:15Z
1
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:unlicense", "region:us" ]
text-to-image
2024-02-07T23:20:12Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '2d, masterpiece, best quality, anime, highly detailed face, highly detailed background, perfect lighting, lola, blue eyes, green_hair, cityscape, full_body, solo, solo focus, t-shirt, shorts, <lora:lola:1>' output: url: images/00492-abyssorangemix3AOM3_aom3a1b_3939236143.jpeg base_model: runwayml/stable-diffusion-v1-5 instance_prompt: lola license: unlicense --- # lola-gunvolt <Gallery /> ## Trigger words You should use `lola` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/joislosinghermind/lola-gunvolt/tree/main) them in the Files & versions tab.
tensor-diffusion/Realistic_Stock_Photo_v2
tensor-diffusion
2024-02-07T22:51:26Z
0
0
null
[ "safetensors", "realistic", "civitai", "text-to-image", "region:us" ]
text-to-image
2024-02-07T22:33:06Z
--- pipeline_tag: text-to-image tags: - safetensors - realistic - civitai --- ### Original model: https://civitai.com/models/139565/realistic-stock-photo?modelVersionId=294470 #### Use it responsibly and comply with applicable laws and regulations
tomashs/multiple_choice_cowese_beto_2
tomashs
2024-02-07T22:08:25Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "multiple-choice", "generated_from_trainer", "base_model:dccuchile/bert-base-spanish-wwm-cased", "base_model:finetune:dccuchile/bert-base-spanish-wwm-cased", "endpoints_compatible", "region:us" ]
multiple-choice
2024-02-07T22:08:04Z
--- base_model: dccuchile/bert-base-spanish-wwm-cased tags: - generated_from_trainer model-index: - name: multiple_choice_cowese_beto_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multiple_choice_cowese_beto_2 This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
siddharthjain16/deepseek-math-7b-instruct-gguf
siddharthjain16
2024-02-07T21:56:41Z
0
1
null
[ "license:other", "region:us" ]
null
2024-02-07T21:56:41Z
--- license: other license_name: deepseek license_link: https://github.com/deepseek-ai/DeepSeek-Math/blob/main/LICENSE-MODEL ---
APaul1/roberta-large-peft-ia3
APaul1
2024-02-07T21:53:06Z
0
0
transformers, peft
[ "transformers, peft", "safetensors", "dataset:glue", "region:us" ]
null
2024-02-05T02:19:23Z
--- library_name: transformers, peft datasets: - glue --- # Model Card for Model ID This model is a PEFT (IA)³ version of roberta-large fine-tuned on the MRPC task of the GLUE dataset. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This model is fine-tuned on the MRPC task of the GLUE dataset, which pairs two sentences and assigns a label indicating whether or not they are semantically equivalent. A dataset example is shown below. ![Screenshot 2024-02-07 at 4.40.51 PM.png](https://cdn-uploads.huggingface.co/production/uploads/6461ad7196259bec21d4f206/w1ZJOYpkv6KvD0mfMkGDi.png) The model was evaluated on the test set, reaching an accuracy of 86.6% and an F1 score of 90%. Similar fine-tuning and evaluation can be done on the other tasks of the GLUE dataset by loading the corresponding config files or defining an appropriate LoRA/(IA)³ config, as in the sample code below: - **Developed by:** PEFT library example - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** roberta-large
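A minimal sketch of such a configuration with the PEFT library (not the exact training script used for this checkpoint; the target module names below are illustrative assumptions and may need adjusting):

```python
from transformers import AutoModelForSequenceClassification
from peft import IA3Config, TaskType, get_peft_model

# Base model for a two-way GLUE task such as MRPC
base_model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-large", num_labels=2
)

# (IA)^3 adapter config; the module names here are illustrative assumptions
ia3_config = IA3Config(
    task_type=TaskType.SEQ_CLS,
    target_modules=["key", "value", "output.dense"],
    feedforward_modules=["output.dense"],
)

model = get_peft_model(base_model, ia3_config)
model.print_trainable_parameters()  # only the (IA)^3 vectors are trainable
```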
hanspeterlyngsoeraaschoujensen/deepseek-math-7b-instruct-GPTQ
hanspeterlyngsoeraaschoujensen
2024-02-07T21:18:11Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-02-07T21:16:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
google/metricx-23-xl-v2p0
google
2024-02-07T21:15:48Z
704
1
transformers
[ "transformers", "pytorch", "mt5", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-02-07T16:34:17Z
--- license: apache-2.0 --- # MetricX-23 *This is not an officially supported Google product.* **GitHub repository: [https://github.com/google-research/metricx](https://github.com/google-research/metricx)** This repository contains the MetricX-23 models, a family of models for automatic evaluation of translations that were proposed in the WMT'23 Metrics Shared Task submission [MetricX-23: The Google Submission to the WMT 2023 Metrics Shared Task](https://aclanthology.org/2023.wmt-1.63/). The models were trained in [T5X](https://github.com/google-research/t5x) and then converted for use in PyTorch. ## Available Models There are 6 models available on HuggingFace that vary in the number of parameters and in whether the model is reference-based or reference-free (also known as quality estimation, or QE): * [MetricX-23-XXL](https://huggingface.co/google/metricx-23-xxl-v2p0) * [MetricX-23-XL](https://huggingface.co/google/metricx-23-xl-v2p0) * [MetricX-23-Large](https://huggingface.co/google/metricx-23-large-v2p0) * [MetricX-23-QE-XXL](https://huggingface.co/google/metricx-23-qe-xxl-v2p0) * [MetricX-23-QE-XL](https://huggingface.co/google/metricx-23-qe-xl-v2p0) * [MetricX-23-QE-Large](https://huggingface.co/google/metricx-23-qe-large-v2p0) We recommend using the XXL model versions for the best agreement with human judgments of translation quality, the Large versions for the best speed, and the XL versions for an intermediate use case. ## Changes to the WMT'23 Submission The models available here are most similar to the primary submission to the WMT'23 Metrics Shared Task. They are initialized with [mT5](https://aclanthology.org/2021.naacl-main.41/) and then fine-tuned on a combination of direct assessment and MQM data. However, we made some changes that make these models different from the WMT'23 submissions. First, the models are trained to regress the actual MQM score rather than a normalized score between 0 and 1. **That means the output from the MetricX-23 models is a score in the range [0, 25] where lower is better (i.e., it predicts an error score).** Second, these models were trained with a larger variety of synthetic data that makes them more robust to translation edge cases like over- and undertranslation, described in more detail in the following section. ### Synthetic Data In order for our MetricX models to learn to identify certain types of bad translations that are not sufficiently (or at all) represented in the regular training data, we created synthetic examples and mixed them in during training. The synthetic training data was generated from the DA datasets ranging from WMT15 to WMT21 (~43 language pairs). In most cases, the synthetic examples have the candidate translation manipulated so as to turn it into a bad translation with a specific issue commonly unrecognized by learned metrics. The table below provides an overview of the various failure modes that we considered, including brief descriptions of how we prepared the synthetic data to address them. | Failure mode | Synthetic example description | | ----------- | ----------- | | Undertranslation | Candidate translation with an arbitrary sentence removed (if multi-sentence); alternatively, candidate with a certain proportion of words removed from the end. | | Overtranslation | Candidate translation duplicated (with space in between). | | Fluent but unrelated translation | Arbitrary reference of a similar length from the dataset. 
| | Gibberish | Text of a similar length to the reference, generated by sampling words from the reference translation vocabulary (built from all references in the data). | | Missing punctuation | Reference translation with the end punctuation removed (11 punctuation symbols considered). | | Latin instead of Chinese/Japanese or Hindi/Bengali punctuation | Candidate translation with the language-specific punctuation symbol at the end replaced with the Latin equivalent (e.g., "." instead of "。" or "।"); alternatively, the punctuation symbol is replaced with the Latin equivalent in the reference, keeping the correct one in the candidate. | | Reference-matching translation | Reference translation copied as the candidate translation (unlike the rest of the synthetic data, these examples are meant to train the metric to predict a perfect score for candidates matching the reference). | Examples from the first four categories were assigned a label corresponding to the worst score on the given rating scale (e.g., 25 when mixed with MQM training data), whereas the reference-matching translation examples are assigned the best score (e.g., 0 when used with MQM data). The missing/incorrect punctuation examples were labeled with a score slightly worse than perfect. Note that some of the synthetic datasets are only meaningful in the reference-based scenario, and we thus excluded them when training a QE variant of MetricX. These are the Latin-vs-special punctuation and the reference-matching translation examples. Most of the synthetic training sets were created using stratified sampling across target languages, taking 500 examples per target language. One exception is the missing punctuation set, which used a stratified sample across different punctuation symbols instead. When training MetricX, a small proportion of the synthetic examples was mixed with the regular training examples. During the first-stage fine-tuning on DA data, each synthetic training set constituted between 0.1% and 1% of all training examples, whereas in the second-stage fine-tuning on MQM data we used an even smaller proportion, around 0.05%. As for evaluating the effect of the synthetic training data on the model's performance, the DEMETR challenge set - which we originally used to evaluate the models submitted to the WMT23 Metrics Shared Task - was no longer adequate. We therefore created a new DEMETR-style test set based on the WMT22 DA data, with examples constructed analogously to the synthetic training examples, as described above. This test set helped us determine the right proportions of synthetic data for fine-tuning in order to make MetricX robust to the failure modes under consideration, without sacrificing the system- and segment-level correlations with human ratings. ## Usage The code for using MetricX models can be found at [https://github.com/google-research/metricx](https://github.com/google-research/metricx). The repository contains example prediction scripts, described below. The `metricx23/predict.py` script contains an example of how to run inference on the models. ### Reference-Based Example usage for a reference-based model: ```bash python -m metricx23.predict \ --tokenizer google/mt5-xl \ --model_name_or_path google/metricx-23-xl-v2p0 \ --max_input_length 1024 \ --batch_size 1 \ --input_file input.jsonl \ --output_file output.jsonl ``` `input.jsonl` is expected to have one serialized JSON object per line with `"reference"` and `"hypothesis"` fields. 
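For illustration, such a file could be produced with a few lines of Python (the sentence pairs below are invented):

```python
import json

# Each line of input.jsonl is one JSON object with "reference" and
# "hypothesis" fields, as described above. These pairs are invented.
examples = [
    {"reference": "They kept the kitchen clean.", "hypothesis": "The kitchen was kept clean."},
    {"reference": "She left early this morning.", "hypothesis": "She departed early today."},
]

with open("input.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```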
The output jsonl will be parallel to `input.jsonl` but additionally contain a `"prediction"` field with the predicted score. Note that the model was trained with a maximum input length of 1024 tokens, so significantly increasing that value may lead to unpredictable behavior. ### Reference-Free Example usage for a reference-free model: ```bash python -m metricx23.predict \ --tokenizer google/mt5-xl \ --model_name_or_path google/metricx-23-qe-xl-v2p0 \ --max_input_length 1024 \ --batch_size 1 \ --input_file input.jsonl \ --output_file output.jsonl \ --qe ``` `input.jsonl` is expected to have one serialized JSON object per line with `"source"` and `"hypothesis"` fields. The output jsonl will be parallel to `input.jsonl` but additionally contain a `"prediction"` field with the predicted score. ## Meta-Evaluation The `metricx23/evaluate.py` script contains code to calculate various correlations between the MetricX-23 scores and MQM ratings of translation quality using the [MT Metrics Eval](https://github.com/google-research/mt-metrics-eval) library. Example usage: ```bash python -m metricx23.evaluate \ --dataset wmt22 \ --lp en-de \ --input_file input.jsonl \ --output_file output.json ``` `input.jsonl` is expected to have one JSON object serialized per line. Each JSON object is expected to contain four fields: * `"system_id"`: The name of the system that generated the translation. * `"segment_id"`: The 0-based index of the corresponding segment in the MT Metrics Eval data. * `"label"`: The ground-truth translation quality score (higher is better). * `"prediction"`: The model-predicted translation quality score (lower is better; the script negates the scores so higher is better). The script will calculate the four agreement/correlation measures that were used in the WMT'23 Shared Task. 
Below are the results for the MetricX-23 models on the WMT'22 Metrics Shared Task data: English-German: | Model | System-Level Accuracy | System-Level Pearson | Segment-Level Pearson | Segment-Level Pairwise Acc | | ----------- | ----------- | ----------- | ----------- | ----------- | | MetricX-23-XXL | 0.795 | 0.835 | 0.546 | 0.619 | | MetricX-23-XL | 0.756 | 0.813 | 0.540 | 0.605 | | MetricX-23-Large | 0.769 | 0.759 | 0.507 | 0.595 | | MetricX-23-QE-XXL | 0.769 | 0.830 | 0.490 | 0.606 | | MetricX-23-QE-XL | 0.718 | 0.684 | 0.421 | 0.594 | | MetricX-23-QE-Large | 0.744 | 0.671 | 0.387 | 0.579 | English-Russian: | Model | System-Level Accuracy | System-Level Pearson | Segment-Level Pearson | Segment-Level Pairwise Acc | | ----------- | ----------- | ----------- | ----------- | ----------- | | MetricX-23-XXL | 0.905 | 0.943 | 0.477 | 0.609 | | MetricX-23-XL | 0.876 | 0.906 | 0.498 | 0.589 | | MetricX-23-Large | 0.876 | 0.841 | 0.474 | 0.569 | | MetricX-23-QE-XXL | 0.895 | 0.940 | 0.470 | 0.602 | | MetricX-23-QE-XL | 0.848 | 0.861 | 0.415 | 0.570 | | MetricX-23-QE-Large | 0.819 | 0.778 | 0.411 | 0.551 | Chinese-English: | Model | System-Level Accuracy | System-Level Pearson | Segment-Level Pearson | Segment-Level Pairwise Acc | | ----------- | ----------- | ----------- | ----------- | ----------- | | MetricX-23-XXL | 0.868 | 0.919 | 0.605 | 0.551 | | MetricX-23-XL | 0.868 | 0.924 | 0.584 | 0.543 | | MetricX-23-Large | 0.857 | 0.919 | 0.555 | 0.539 | | MetricX-23-QE-XXL | 0.857 | 0.928 | 0.573 | 0.544 | | MetricX-23-QE-XL | 0.802 | 0.879 | 0.546 | 0.529 | | MetricX-23-QE-Large | 0.758 | 0.904 | 0.522 | 0.529 | The `metricx23/evaluate_wmt23.py` script re-calculates the average correlation score that was used to rank submissions from the [WMT'23 Shared Task](https://www2.statmt.org/wmt23/pdf/2023.wmt-1.51.pdf). Example usage: ```bash python -m metricx23.evaluate_wmt23 \ --en_de predictions_ende.jsonl \ --he_en predictions_heen.jsonl \ --zh_en predictions_zhen.jsonl \ --output_file output.json ``` Each of the three input files is expected to be in the same format as described above, and each file should correspond to running inference on one of the language pairs from the WMT'23 dataset. The results for each of the models are the following: | Model | Average Correlation | | ----------- | ----------- | | MetricX-23-XXL | 0.812 | | MetricX-23-XL | 0.813 | | MetricX-23-Large | 0.794 | | MetricX-23-QE-XXL | 0.797 | | MetricX-23-QE-XL | 0.767 | | MetricX-23-QE-Large | 0.762 | ## Citation If you use MetricX-23 in your research, please cite the following publication: ```bibtex @inproceedings{juraska-etal-2023-metricx, title = {{MetricX-23: The Google Submission to the WMT 2023 Metrics Shared Task}}, author = "Juraska, Juraj and Finkelstein, Mara and Deutsch, Daniel and Siddhant, Aditya and Mirzazadeh, Mehdi and Freitag, Markus", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.63", doi = "10.18653/v1/2023.wmt-1.63", pages = "756--767", } ```
Jimmyhd/mistral7btimebookFinetune50rows
Jimmyhd
2024-02-07T21:13:25Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-07T21:04:28Z
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to(model.device)) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
andrewatef/MoMask-test
andrewatef
2024-02-07T21:12:35Z
0
0
null
[ "arxiv:2312.00063", "region:us" ]
null
2024-02-07T13:33:10Z
--- title: MoMask emoji: 🎭 colorFrom: pink colorTo: purple sdk: gradio sdk_version: 3.24.1 app_file: app.py pinned: false --- # MoMask: Generative Masked Modeling of 3D Human Motions ## [[Project Page]](https://ericguo5513.github.io/momask) [[Paper]](https://arxiv.org/abs/2312.00063) ![teaser_image](https://ericguo5513.github.io/momask/static/images/teaser.png) If you find our code or paper helpful, please consider citing: ``` @article{guo2023momask, title={MoMask: Generative Masked Modeling of 3D Human Motions}, author={Chuan Guo and Yuxuan Mu and Muhammad Gohar Javed and Sen Wang and Li Cheng}, year={2023}, eprint={2312.00063}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ## :postbox: News 📢 **2023-12-19** --- Release scripts for temporal inpainting. 📢 **2023-12-15** --- Release codes and models for MoMask, including training/eval/generation scripts. 📢 **2023-11-29** --- Initialized the webpage and git project. ## :round_pushpin: Get You Ready <details> ### 1. Conda Environment ``` conda env create -f environment.yml conda activate momask pip install git+https://github.com/openai/CLIP.git ``` We tested our code on Python 3.7.13 and PyTorch 1.7.1. ### 2. Models and Dependencies #### Download Pre-trained Models ``` bash prepare/download_models.sh ``` #### Download Evaluation Models and GloVe For evaluation only. ``` bash prepare/download_evaluator.sh bash prepare/download_glove.sh ``` #### Troubleshooting To address the gdown download error "Cannot retrieve the public link of the file. You may need to change the permission to 'Anyone with the link', or have had many accesses", a potential solution is to run `pip install --upgrade --no-cache-dir gdown`, as suggested at https://github.com/wkentaro/gdown/issues/43. #### (Optional) Download Manually Visit [[Google Drive]](https://drive.google.com/drive/folders/1b3GnAbERH8jAoO5mdWgZhyxHB73n23sK?usp=drive_link) to download the models and evaluators manually. ### 3. Get Data You have two options here: * **Skip getting data**, if you just want to generate motions using your *own* descriptions. * **Get full data**, if you want to *re-train* and *evaluate* the model. **(a). Full data (text + motion)** **HumanML3D** - Follow the instructions in [HumanML3D](https://github.com/EricGuo5513/HumanML3D.git), then copy the resulting dataset to our repository: ``` cp -r ../HumanML3D/HumanML3D ./dataset/HumanML3D ``` **KIT** - Download from [HumanML3D](https://github.com/EricGuo5513/HumanML3D.git), then place the result in `./dataset/KIT-ML`. </details> ## :rocket: Demo <details> ### (a) Generate from a single prompt ``` python gen_t2m.py --gpu_id 1 --ext exp1 --text_prompt "A person is running on a treadmill." ``` ### (b) Generate from a prompt file An example prompt file is given in `./assets/text_prompt.txt`. Please follow the format `<text description>#<motion length>` on each line (an illustrative example is given at the end of this subsection). Motion length indicates the number of poses, which must be an integer and will be rounded to a multiple of 4. In our work, motion is at 20 fps. If you write `<text description>#NA`, our model will determine a length. Note that once there is **one** NA, all the others will be treated as **NA** automatically. ``` python gen_t2m.py --gpu_id 1 --ext exp2 --text_path ./assets/text_prompt.txt ``` A few more parameters you may be interested in: * `--repeat_times`: number of replications for generation, default `1`. * `--motion_length`: specify the number of poses for generation, only applicable in (a).
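For illustration, a prompt file following this format might look like the sketch below (the prompts are invented, not the contents of `./assets/text_prompt.txt`):

```
a person walks forward and waves with the right hand.#120
a person jumps in place twice.#96
```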
The output files are stored under the folder `./generation/<ext>/`. They are: * `numpy files`: generated motions with a shape of (nframe, 22, 3), under the subfolder `./joints`. * `video files`: stick-figure animations in mp4 format, under the subfolder `./animation`. * `bvh files`: bvh files of the generated motions, under the subfolder `./animation`. We also apply naive foot IK to the generated motions; see the files with the suffix `_ik`. It sometimes works well, but it can also fail. </details> ## :dancers: Visualization <details> All the animations are manually rendered in Blender. We use the characters from [mixamo](https://www.mixamo.com/#/). You need to download the characters in T-Pose with a skeleton. ### Retargeting For retargeting, we found that Rokoko usually produces large errors on the feet. On the other hand, [keemap.rig.transfer](https://github.com/nkeeline/Keemap-Blender-Rig-ReTargeting-Addon/releases) shows more precise retargeting. You can watch the [tutorial](https://www.youtube.com/watch?v=EG-VCMkVpxg) here, then follow these steps: * Download keemap.rig.transfer from GitHub, and install it in Blender. * Import both the motion files (.bvh) and character files (.fbx) into Blender. * `Shift + Select` both the source and target skeletons (they do not need to be in Rest Position). * Switch to `Pose Mode`, then unfold the `KeeMapRig` tool at the top-right corner of the view window. * Load and read the bone mapping file `./assets/mapping.json` (or `mapping6.json` if it doesn't work). This file was made manually by us and works for most Mixamo characters; you can also make your own. * Adjust the `Number of Samples`, `Source Rig`, and `Destination Rig Name`. * Click `Transfer Animation from Source Destination` and wait a few seconds. We have not tried other retargeting tools; feel free to comment if you find more useful ones. ### Scene We use this [scene](https://drive.google.com/file/d/1lg62nugD7RTAIz0Q_YP2iZsxpUzzOkT1/view?usp=sharing) for animation. </details> ## :clapper: Temporal Inpainting <details> We conduct mask-based editing in the m-transformer stage, followed by the regeneration of residual tokens for the entire sequence. To load your own motion, provide the path through `--source_motion`. Use `-msec` to specify the mask section, given either as ratios or as frame indices. For instance, `-msec 0.3,0.6` with `max_motion_length=196` is equivalent to `-msec 59,118`, indicating that the frame section [59, 118] will be edited. ``` python edit_t2m.py --gpu_id 1 --ext exp3 --use_res_model -msec 0.4,0.7 --text_prompt "A man picks something from the ground using his right hand." ``` Note: Presently, the source motion must adhere to the format of a HumanML3D dim-263 feature vector. Example motion vector data from the HumanML3D test set is available at `example_data/000612.npy`. To process your own motion data, you can use the `process_file` function from `utils/motion_process.py`. </details> ## :space_invader: Train Your Own Models <details> **Note**: You have to train the RVQ **BEFORE** training the masked/residual transformers. The latter two can be trained simultaneously. 
### Train RVQ ``` python train_vq.py --name rvq_name --gpu_id 1 --dataset_name t2m --batch_size 512 --num_quantizers 6 --max_epoch 500 --quantize_drop_prob 0.2 ``` ### Train Masked Transformer ``` python train_t2m_transformer.py --name mtrans_name --gpu_id 2 --dataset_name t2m --batch_size 64 --vq_name rvq_name ``` ### Train Residual Transformer ``` python train_res_transformer.py --name rtrans_name --gpu_id 2 --dataset_name t2m --batch_size 64 --vq_name rvq_name --cond_drop_prob 0.2 --share_weight ``` * `--dataset_name`: motion dataset, `t2m` for HumanML3D and `kit` for KIT-ML. * `--name`: name of your model. This will create the model space `./checkpoints/<dataset_name>/<name>`. * `--gpu_id`: GPU id. * `--batch_size`: we use `512` for RVQ training. For the masked/residual transformers, we use `64` on HumanML3D and `16` on KIT-ML. * `--num_quantizers`: number of quantization layers; `6` is used in our case. * `--quantize_drop_prob`: quantization dropout ratio; `0.2` is used. * `--vq_name`: when training the masked/residual transformer, you need to specify the name of the RVQ model used for tokenization. * `--cond_drop_prob`: condition drop ratio, for classifier-free guidance; `0.2` is used. * `--share_weight`: whether to share the projection/embedding weights in the residual transformer. All the pre-trained models and intermediate results will be saved under `./checkpoints/<dataset_name>/<name>`. </details> ## :book: Evaluation <details> ### Evaluate RVQ Reconstruction: HumanML3D: ``` python eval_t2m_vq.py --gpu_id 0 --name rvq_nq6_dc512_nc512_noshare_qdp0.2 --dataset_name t2m --ext rvq_nq6 ``` KIT-ML: ``` python eval_t2m_vq.py --gpu_id 0 --name rvq_nq6_dc512_nc512_noshare_qdp0.2_k --dataset_name kit --ext rvq_nq6 ``` ### Evaluate Text2motion Generation: HumanML3D: ``` python eval_t2m_trans_res.py --res_name tres_nlayer8_ld384_ff1024_rvq6ns_cdp0.2_sw --dataset_name t2m --name t2m_nlayer8_nhead6_ld384_ff1024_cdp0.1_rvq6ns --gpu_id 1 --cond_scale 4 --time_steps 10 --ext evaluation ``` KIT-ML: ``` python eval_t2m_trans_res.py --res_name tres_nlayer8_ld384_ff1024_rvq6ns_cdp0.2_sw_k --dataset_name kit --name t2m_nlayer8_nhead6_ld384_ff1024_cdp0.1_rvq6ns_k --gpu_id 0 --cond_scale 2 --time_steps 10 --ext evaluation ``` * `--res_name`: model name of the `residual transformer`. * `--name`: model name of the `masked transformer`. * `--cond_scale`: scale of classifier-free guidance. * `--time_steps`: number of iterations for inference. * `--ext`: filename for saving evaluation results. The final evaluation results will be saved in `./checkpoints/<dataset_name>/<name>/eval/<ext>.log`. </details> ## Acknowledgements We sincerely thank the authors of the following open-source works, on which our code is based: [deep-motion-editing](https://github.com/DeepMotionEditing/deep-motion-editing), [Muse](https://github.com/lucidrains/muse-maskgit-pytorch), [vector-quantize-pytorch](https://github.com/lucidrains/vector-quantize-pytorch), [T2M-GPT](https://github.com/Mael-zys/T2M-GPT), [MDM](https://github.com/GuyTevet/motion-diffusion-model/tree/main) and [MLD](https://github.com/ChenFengYe/motion-latent-diffusion/tree/main). ## License This code is distributed under an [MIT LICENSE](https://github.com/EricGuo5513/momask-codes/tree/main?tab=MIT-1-ov-file#readme). Note that our code depends on other libraries, including SMPL, SMPL-X, and PyTorch3D, and uses datasets which each have their own respective licenses that must also be followed.
JiajingChen/d
JiajingChen
2024-02-07T21:09:03Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-02-07T21:03:04Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: d results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
gayanin/bart-noised-with-gcd-dist-0.4
gayanin
2024-02-07T21:08:50Z
23
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-base", "base_model:finetune:facebook/bart-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-07T19:03:27Z
--- license: apache-2.0 base_model: facebook/bart-base tags: - generated_from_trainer model-index: - name: bart-noised-with-gcd-dist-0.4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-noised-with-gcd-dist-0.4 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
gayanin/bart-noised-with-gcd-dist-0.3
gayanin
2024-02-07T21:08:46Z
3
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-base", "base_model:finetune:facebook/bart-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-07T17:29:08Z
--- license: apache-2.0 base_model: facebook/bart-base tags: - generated_from_trainer model-index: - name: bart-noised-with-gcd-dist-0.3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-noised-with-gcd-dist-0.3 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
gayanin/bart-noised-with-gcd-dist-0.1
gayanin
2024-02-07T21:08:27Z
3
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-base", "base_model:finetune:facebook/bart-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-07T17:28:08Z
--- license: apache-2.0 base_model: facebook/bart-base tags: - generated_from_trainer model-index: - name: bart-noised-with-gcd-dist-0.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-noised-with-gcd-dist-0.1 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
danaleee/Long_rank10_iter500_valprompt
danaleee
2024-02-07T21:07:20Z
2
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-02-07T18:44:33Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of sks rc_car tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - danaleee/Long_rank10_iter500_valprompt These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks rc_car using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False.
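A minimal usage sketch with 🤗 Diffusers (an illustrative assumption of how these weights can be loaded, not from the original card; requires a diffusers version that provides `load_lora_weights`):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA adapter weights from this repository
pipe.load_lora_weights("danaleee/Long_rank10_iter500_valprompt")

# "a photo of sks rc_car" is the instance prompt used during training
image = pipe("a photo of sks rc_car").images[0]
image.save("rc_car.png")
```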
Shaleen123/code-yi-6b
Shaleen123
2024-02-07T21:05:04Z
4
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-01-28T17:46:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ClementeH/faisan-7b
ClementeH
2024-02-07T21:03:48Z
2
0
peft
[ "peft", "region:us" ]
null
2024-02-07T17:56:15Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0
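For reference, a sketch of how the quantization settings listed above could be reconstructed when reloading the adapter's base model (illustrative only; the base model name is not recorded in this card):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes values listed under "Training procedure"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```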