| Column | Dtype | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-07-14 12:27:51 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (520 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-07-14 12:25:52 |
| card | string (length) | 11 | 1.01M |
DrishtiSharma/llama-pro-8b-tweet-summarization-gradnorm-0.3
DrishtiSharma
2024-02-01T08:21:24Z
1
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:dialogstudio", "base_model:TencentARC/LLaMA-Pro-8B", "base_model:adapter:TencentARC/LLaMA-Pro-8B", "license:llama2", "region:us" ]
null
2024-02-01T06:31:57Z
--- license: llama2 library_name: peft tags: - trl - sft - generated_from_trainer datasets: - dialogstudio base_model: TencentARC/LLaMA-Pro-8B model-index: - name: llama-pro-8b-tweet-summarization-gradnorm-0.3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama-pro-8b-tweet-summarization-gradnorm-0.3 This model is a fine-tuned version of [TencentARC/LLaMA-Pro-8B](https://huggingface.co/TencentARC/LLaMA-Pro-8B) on the dialogstudio dataset. It achieves the following results on the evaluation set: - Loss: 2.9796 - Rouge Scores: {'rouge1': 93.71888929189157, 'rouge2': 77.8377567936117, 'rougeL': 64.47906852741538, 'rougeLsum': 93.71298018429633} - Bleu Scores: [0.9470990193868204, 0.9341779145832757, 0.9064440397746264, 0.8744914403659334] - Gen Len: 463.0182 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 7 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge Scores | Bleu Scores | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------:|:--------:| | 1.9065 | 1.0 | 220 | 1.8530 | {'rouge1': 92.83694737064799, 'rouge2': 78.72458121869542, 'rougeL': 67.88788283384865, 'rougeLsum': 92.83768512059282} | [0.8739198483584956, 0.8530170264142946, 0.8271978418182495, 0.7998377773703629] | 463.0182 | | 1.6363 | 2.0 | 440 | 1.8633 | {'rouge1': 93.54135671371444, 'rouge2': 78.96116387599493, 'rougeL': 67.77857901494997, 'rougeLsum': 93.54432289584433} | [0.8758125801988195, 0.8577741180618648, 0.8322886881519586, 0.80457236049974] | 463.0182 | | 1.2817 | 3.0 | 660 | 2.0098 | {'rouge1': 87.30764070509844, 'rouge2': 73.12328274037898, 'rougeL': 62.00625532521349, 'rougeLsum': 87.29149649901954} | [0.8757949025917542, 0.8593181834244542, 0.8334473061685955, 0.8048319452251607] | 463.0182 | | 0.9049 | 4.0 | 880 | 2.2481 | {'rouge1': 87.35996946418575, 'rouge2': 72.87802745947901, 'rougeL': 61.35206821444361, 'rougeLsum': 87.32662841081371} | [0.8755472589597261, 0.859572654041077, 0.8333237300074641, 0.804082483213136] | 463.0182 | | 0.5916 | 5.0 | 1100 | 2.5061 | {'rouge1': 78.38431994557745, 'rouge2': 64.89809559762811, 'rougeL': 53.805209482421525, 'rougeLsum': 78.30608179426231} | [0.747179346815877, 0.7352208249958618, 0.7126103103040894, 0.6869428956670465] | 463.0182 | | 0.3898 | 6.0 | 1320 | 2.8150 | {'rouge1': 93.77539618029996, 'rouge2': 78.03050568501187, 'rougeL': 64.82344374456906, 'rougeLsum': 93.76894400818286} | [0.9469183628254614, 0.9342162110956728, 0.9067374010427977, 0.8750430150656403] | 463.0182 | | 0.2961 | 7.0 | 1540 | 2.9796 | {'rouge1': 93.71888929189157, 'rouge2': 77.8377567936117, 'rougeL': 64.47906852741538, 'rougeLsum': 93.71298018429633} | [0.9470990193868204, 0.9341779145832757, 0.9064440397746264, 0.8744914403659334] | 463.0182 | ### Framework versions - PEFT 
0.8.2.dev0 - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.16.2.dev0 - Tokenizers 0.15.1
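A minimal loading sketch for this adapter, assuming the usual PEFT workflow; the dtype, device placement, and example prompt are illustrative, and if the adapter repo carries no tokenizer, one can be loaded from the TencentARC/LLaMA-Pro-8B base instead.

```python
# Hedged sketch: load the PEFT adapter together with its base model.
# Requires `peft`, `transformers`, and `accelerate`; settings below are illustrative.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "DrishtiSharma/llama-pro-8b-tweet-summarization-gradnorm-0.3"

# If the adapter repo ships no tokenizer, load it from TencentARC/LLaMA-Pro-8B instead.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,                 # the base model is resolved from the adapter config
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Summarize the following tweets:\n- The launch event starts at 9am.\n- Tickets are sold out."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```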
vitruv/vitruv_1
vitruv
2024-02-01T08:19:08Z
116
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-01T07:48:27Z
--- license: apache-2.0 language: - ko --- Who we are: Virtruv. This model was trained with a focus on mathematics in Korean. Base Model: 'beomi/OPEN-SOLAR-KO-10.7B' Dataset: 1. traintogpb/aihub-koen-translation-integrated-tiny-100k 2. kyujinpy/KOR-gugugu-platypus-set 3. GAIR/MathPile: we sampled this dataset and translated it ourselves. Prompt:
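Since the card's prompt section is truncated, the snippet below is only a generic text-generation sketch using standard `transformers` loading; the plain Korean prompt, dtype, and device settings are assumptions rather than the documented format.

```python
# Hedged sketch: generic causal-LM loading for vitruv/vitruv_1.
# The prompt format is not documented in the card; the prompt below is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vitruv/vitruv_1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("다음 수학 문제를 풀어 주세요: 12 + 35 = ?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```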
DrishtiSharma/llama2-7bb-tweet-summarization-gradnorm-0.3
DrishtiSharma
2024-02-01T08:17:42Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:dialogstudio", "base_model:NousResearch/Llama-2-7b-hf", "base_model:adapter:NousResearch/Llama-2-7b-hf", "region:us" ]
null
2024-02-01T08:16:54Z
--- library_name: peft tags: - trl - sft - generated_from_trainer datasets: - dialogstudio base_model: NousResearch/Llama-2-7b-hf model-index: - name: llama2-7bb-tweet-summarization-gradnorm-0.3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama2-7bb-tweet-summarization-gradnorm-0.3 This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the dialogstudio dataset. It achieves the following results on the evaluation set: - Loss: 2.8160 - Rouge Scores: {'rouge1': 93.719779910895, 'rouge2': 78.0799701185797, 'rougeL': 64.91384075272471, 'rougeLsum': 93.71249369436103} - Bleu Scores: [0.9468715981421053, 0.9340571158071639, 0.906767913949756, 0.8753561378232885] - Gen Len: 463.0182 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 7 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge Scores | Bleu Scores | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-----------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------:|:--------:| | 1.9246 | 1.0 | 220 | 1.8384 | {'rouge1': 92.78080137059182, 'rouge2': 78.71532643138437, 'rougeL': 68.0616149947273, 'rougeLsum': 92.78702835703021} | [0.9079275318266272, 0.8970741286020552, 0.8736002135507472, 0.8466150307526832] | 463.0182 | | 1.6564 | 2.0 | 440 | 1.8335 | {'rouge1': 93.62527163754612, 'rouge2': 79.14899366889107, 'rougeL': 68.02122989340602, 'rougeLsum': 93.62676386700348} | [0.9282164809556785, 0.9171615801879893, 0.892709310950969, 0.8645188775345913] | 463.0182 | | 1.3403 | 3.0 | 660 | 1.9481 | {'rouge1': 93.70688850262614, 'rouge2': 78.96026100012381, 'rougeL': 67.37638965440908, 'rougeLsum': 93.70399692691778} | [0.9342903619020663, 0.9225682522334384, 0.8972845918789121, 0.8681853449069523] | 463.0182 | | 0.9984 | 4.0 | 880 | 2.1537 | {'rouge1': 93.77800041953847, 'rouge2': 78.72204799373465, 'rougeL': 66.56763131340682, 'rougeLsum': 93.77100407824561} | [0.9425931953005738, 0.9302863494509406, 0.9040669212466305, 0.8739193334758137] | 463.0182 | | 0.7 | 5.0 | 1100 | 2.3692 | {'rouge1': 93.74639046979189, 'rouge2': 78.51569240275262, 'rougeL': 65.93032986525995, 'rougeLsum': 93.73745084400457} | [0.9440175755443134, 0.93171453625075, 0.9052208696375351, 0.8747208115562404] | 463.0182 | | 0.4947 | 6.0 | 1320 | 2.6590 | {'rouge1': 93.75661844384149, 'rouge2': 78.18805763398609, 'rougeL': 65.29243896759789, 'rougeLsum': 93.75034348574664} | [0.9470358425741272, 0.9342995624545122, 0.9070823690393129, 0.8757451333358709] | 463.0182 | | 0.3922 | 7.0 | 1540 | 2.8160 | {'rouge1': 93.719779910895, 'rouge2': 78.0799701185797, 'rougeL': 64.91384075272471, 'rougeLsum': 93.71249369436103} | [0.9468715981421053, 0.9340571158071639, 0.906767913949756, 0.8753561378232885] | 463.0182 | ### Framework versions - PEFT 0.8.2.dev0 - 
Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.16.2.dev0 - Tokenizers 0.15.1
Sacralet/dbw-bert-large-1.1
Sacralet
2024-02-01T08:16:00Z
2
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-large-uncased", "base_model:finetune:google-bert/bert-large-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-02-01T02:57:28Z
--- license: apache-2.0 base_model: bert-large-uncased tags: - generated_from_trainer model-index: - name: dbw-bert-large-1.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dbw-bert-large-1.1 This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0263 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0616 | 0.09 | 100 | 0.0571 | | 0.0516 | 0.18 | 200 | 0.0438 | | 0.0417 | 0.27 | 300 | 0.0349 | | 0.0372 | 0.36 | 400 | 0.0319 | | 0.0317 | 0.44 | 500 | 0.0306 | | 0.0309 | 0.53 | 600 | 0.0294 | | 0.0287 | 0.62 | 700 | 0.0286 | | 0.029 | 0.71 | 800 | 0.0273 | | 0.0351 | 0.8 | 900 | 0.0265 | | 0.0252 | 0.89 | 1000 | 0.0262 | | 0.0226 | 0.98 | 1100 | 0.0263 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
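A minimal usage sketch for the fine-tuned fill-mask model, assuming a standard `transformers` pipeline; the example sentence is illustrative.

```python
# Hedged sketch: query the fine-tuned BERT model through the fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Sacralet/dbw-bert-large-1.1")
for prediction in fill_mask("The central bank raised interest [MASK] today."):
    print(f"{prediction['token_str']!r}: {prediction['score']:.3f}")
```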
cecb/newsfinetune
cecb
2024-02-01T08:14:58Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-01T08:14:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
agshubhi/Insurance_complaint_mgmt
agshubhi
2024-02-01T08:13:12Z
0
0
null
[ "dataset:ebrigham/NL_insurance_reviews_sentiment", "license:mit", "region:us" ]
null
2024-02-01T07:51:27Z
--- license: mit datasets: - ebrigham/NL_insurance_reviews_sentiment ---
Shreyas0706/Llama-2-7b-chat-hf-fine-tuned-adapters
Shreyas0706
2024-02-01T07:59:43Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-02-01T07:59:37Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2.dev0
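A minimal sketch for attaching this adapter to its base with `PeftModel`, assuming access to the gated meta-llama/Llama-2-7b-chat-hf base (an accepted license on the Hub is required); the dtype, device placement, and [INST] prompt are illustrative.

```python
# Hedged sketch: attach the LoRA adapter to its base chat model.
# meta-llama/Llama-2-7b-chat-hf is gated; request access on the Hub first.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "Shreyas0706/Llama-2-7b-chat-hf-fine-tuned-adapters"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("[INST] Hello, who are you? [/INST]", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```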
LoneStriker/CapybaraHermes-2.5-Mistral-7B-3.0bpw-h6-exl2
LoneStriker
2024-02-01T07:58:06Z
8
0
trl
[ "trl", "safetensors", "mistral", "distilabel", "dpo", "rlaif", "rlhf", "en", "dataset:argilla/dpo-mix-7k", "license:apache-2.0", "region:us" ]
null
2024-02-01T07:56:25Z
--- library_name: trl license: apache-2.0 datasets: - argilla/dpo-mix-7k language: - en tags: - distilabel - dpo - rlaif - rlhf --- # CapybaraHermes-2.5-Mistral-7B <div> <img src="https://cdn-uploads.huggingface.co/production/uploads/60420dccc15e823a685f2b03/Vmr0FtTvnny6Snm-UDM_n.png"> </div> <p align="center"> <a href="https://github.com/argilla-io/distilabel"> <img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/> </a> </p> This model is the launching partner of the [capybara-dpo dataset](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-9k-binarized) build with โš—๏ธ distilabel. It's a preference tuned [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B). CapybaraHermes has been preference tuned with LoRA and TRL for 3 epochs using argilla's [dpo mix 7k](https://huggingface.co/datasets/argilla/dpo-mix-7k). To test the impact on multi-turn performance we have used MTBench. We also include the Nous Benchmark results and Mistral-7B-Instruct-v0.2 for reference as it's a strong 7B model on MTBench: | Model | AGIEval | GPT4All | TruthfulQA | Bigbench | MTBench First Turn | MTBench Second Turn | Nous avg. | MTBench avg. | |-----------------------------------|---------|---------|------------|----------|------------|-------------|-----------|--------------| | argilla/CapybaraHermes-2.5-Mistral-7B | **43.8** | **73.35** | 57.07 | **42.44** | 8.24375 | **7.5625** | 54.16 | **7.903125** | | teknium/OpenHermes-2.5-Mistral-7B | 42.75 | 72.99 | 52.99 | 40.94 | **8.25** | 7.2875 | 52.42 | 7.76875 | | Mistral-7B-Instruct-v0.2 | 38.5 | 71.64 | **66.82** | 42.29 | 7.8375 | 7.1 | **54.81** | 7.46875 | The most interesting aspect in the context of the capybara-dpo dataset is the increased performance in MTBench Second Turn scores. For the merge lovers, we also preference tuned Beagle14-7B with a mix of capybara-dpo and distilabel orca pairs using the same recipe as NeuralBeagle (see [ YALL - Yet Another LLM Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard) for reference): | Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average| |------------------------------------------------------------------------------------------------------------------------------------|------:|------:|---------:|-------:|------:| |[DistilabelBeagle14-7B](https://huggingface.co/dvilasuero/DistilabelBeagle14-7B)| 45.29| 76.92| 71.66| 48.78| 60.66| ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Argilla - **Shared by [optional]:** Argilla - **Model type:** 7B chat model - **Language(s) (NLP):** English - **License:** Same as OpenHermes - **Finetuned from model [optional]:** [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
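Note that this particular repo is a 3.0bpw EXL2 quantization, which needs an ExLlamaV2-compatible loader rather than plain `transformers`. A minimal generation sketch for the full-precision model referenced in the card, argilla/CapybaraHermes-2.5-Mistral-7B, assuming its chat template ships in the tokenizer config:

```python
# Hedged sketch: multi-turn generation with the preference-tuned model.
# The chat template is read from the tokenizer config (assumed to follow the OpenHermes-2.5 base).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "argilla/CapybaraHermes-2.5-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Give me two tips for sleeping better."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```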
franklee1015/ppo-LunarLander-v2
franklee1015
2024-02-01T07:58:04Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-29T03:40:37Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 298.43 +/- 12.85 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
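A minimal sketch that completes the TODO usage block above, assuming the conventional `huggingface_sb3` checkpoint name; the actual filename should be taken from the repo's file list.

```python
# Hedged sketch completing the TODO above. The checkpoint filename follows the usual
# huggingface_sb3 convention and is an assumption, not taken from the card.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

checkpoint = load_from_hub(
    repo_id="franklee1015/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward = {mean_reward:.2f} +/- {std_reward:.2f}")
```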
mkdir700/v3-starcoderbase1b-personal-copilot-A100-40GB-colab
mkdir700
2024-02-01T07:53:07Z
184
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt_bigcode", "text-generation", "generated_from_trainer", "base_model:bigcode/starcoderbase-1b", "base_model:finetune:bigcode/starcoderbase-1b", "license:bigcode-openrail-m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-01T06:24:51Z
--- license: bigcode-openrail-m tags: - generated_from_trainer base_model: bigcode/starcoderbase-1b model-index: - name: v3-starcoderbase1b-personal-copilot-A100-40GB-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # v3-starcoderbase1b-personal-copilot-A100-40GB-colab This model is a fine-tuned version of [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 30 - training_steps: 2000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0 | 0.05 | 100 | nan | | 0.0 | 0.1 | 200 | nan | | 0.0 | 0.15 | 300 | nan | | 0.0 | 0.2 | 400 | nan | | 0.0 | 0.25 | 500 | nan | | 0.0 | 0.3 | 600 | nan | | 0.0 | 0.35 | 700 | nan | | 0.0 | 0.4 | 800 | nan | | 0.0 | 0.45 | 900 | nan | | 0.0 | 0.5 | 1000 | nan | | 0.0 | 0.55 | 1100 | nan | | 0.0 | 0.6 | 1200 | nan | | 0.0 | 0.65 | 1300 | nan | | 0.0 | 0.7 | 1400 | nan | | 0.0 | 0.75 | 1500 | nan | | 0.0 | 0.8 | 1600 | nan | | 0.0 | 0.85 | 1700 | nan | | 0.0 | 0.9 | 1800 | nan | | 0.0 | 0.95 | 1900 | nan | | 0.0 | 1.0 | 2000 | nan | ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
dhanikitkat/topic-class-indo-bank
dhanikitkat
2024-02-01T07:43:28Z
126
0
transformers
[ "transformers", "pytorch", "tf", "safetensors", "roberta", "text-classification", "indonesian-roberta-base-sentiment-classifier", "id", "dataset:indonlu", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-31T04:31:49Z
--- language: id tags: - indonesian-roberta-base-sentiment-classifier license: mit datasets: - indonlu widget: - text: "di update malah tidak bisa dibuka" ---
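A minimal classification sketch, assuming a standard `transformers` pipeline and reusing the widget text from the card.

```python
# Hedged sketch: run the classifier on the widget example from the card.
from transformers import pipeline

classifier = pipeline("text-classification", model="dhanikitkat/topic-class-indo-bank")
print(classifier("di update malah tidak bisa dibuka"))
```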
THUDM/LongAlign-13B-64k-base
THUDM
2024-02-01T07:31:38Z
21
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "Long Context", "en", "zh", "arxiv:2401.18058", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-29T13:37:47Z
--- language: - en - zh library_name: transformers tags: - Long Context - llama pipeline_tag: text-generation license: apache-2.0 --- # LongAlign-13B-64k-base <p align="center"> 🤗 <a href="https://huggingface.co/datasets/THUDM/LongAlign-10k" target="_blank">[LongAlign Dataset] </a> • 💻 <a href="https://github.com/THUDM/LongAlign" target="_blank">[Github Repo]</a> • 📃 <a href="https://arxiv.org/abs/2401.18058" target="_blank">[LongAlign Paper]</a> </p> **LongAlign** is the first full recipe for LLM alignment on long context. We propose the **LongAlign-10k** dataset, containing 10,000 long instruction examples of 8k-64k tokens in length. We investigate training strategies, namely **packing (with loss weighting) and sorted batching**, both of which are implemented in our code. For real-world long-context evaluation, we introduce **LongBench-Chat**, which evaluates instruction-following capability on queries of 10k-100k tokens in length. ## All Models We open-sourced the following list of models: |Model|Huggingface Repo|Description| |---|---|---| |**LongAlign-6B-64k-base**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-6B-64k-base) | **ChatGLM3-6B** with an extended 64k context window | |**LongAlign-6B-64k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-6B-64k) | Chat model by LongAlign training on LongAlign-6B-64k-base| |**LongAlign-7B-64k-base**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-7B-64k-base) | **Llama-2-7B** with an extended 64k context window | |**LongAlign-7B-64k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-7B-64k) | Chat model by LongAlign training on LongAlign-7B-64k-base| |**LongAlign-13B-64k-base**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-13B-64k-base) | **Llama-2-13B** with an extended 64k context window | |**LongAlign-13B-64k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-13B-64k) | Chat model by LongAlign training on LongAlign-13B-64k-base| |**ChatGLM3-6B-128k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/chatglm3-6b-128k) | **ChatGLM3-6B** with a 128k context window| ![](assets/leaderboard.png) ## Model usage Chat prompt template for LongAlign-6B-64k: ```text [Round 1] 问：Hi! 答：Hello! What can I assist you today? [Round 2] 问：What should I do if I can't sleep at night? 答： ``` Chat prompt template for LongAlign-7B-64k and LongAlign-13B-64k: ```text [INST]Hi![/INST]Hello! What can I assist you today? [INST]What should I do if I can't sleep at night?[/INST] ``` ChatGLM3-6B-128k uses the same prompt template as [ChatGLM3-6B](https://huggingface.co/THUDM/chatglm3-6b). A simple demo for deployment of the model: ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("THUDM/LongAlign-6B-64k", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("THUDM/LongAlign-6B-64k", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto") model = model.eval() query = open("assets/paper.txt").read() + "\n\nPlease summarize the paper." response, history = model.chat(tokenizer, query, history=[], max_new_tokens=512, temperature=1) print(response) ``` ## Citation If you find our work useful, please consider citing LongAlign: ``` ```
THUDM/LongAlign-13B-64k
THUDM
2024-02-01T07:31:22Z
22
13
transformers
[ "transformers", "pytorch", "llama", "text-generation", "Long Context", "en", "zh", "dataset:THUDM/LongAlign-10k", "arxiv:2401.18058", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-29T11:29:19Z
--- language: - en - zh library_name: transformers tags: - Long Context - llama datasets: - THUDM/LongAlign-10k pipeline_tag: text-generation license: apache-2.0 --- # LongAlign-13B-64k <p align="center"> 🤗 <a href="https://huggingface.co/datasets/THUDM/LongAlign-10k" target="_blank">[LongAlign Dataset] </a> • 💻 <a href="https://github.com/THUDM/LongAlign" target="_blank">[Github Repo]</a> • 📃 <a href="https://arxiv.org/abs/2401.18058" target="_blank">[LongAlign Paper]</a> </p> **LongAlign** is the first full recipe for LLM alignment on long context. We propose the **LongAlign-10k** dataset, containing 10,000 long instruction examples of 8k-64k tokens in length. We investigate training strategies, namely **packing (with loss weighting) and sorted batching**, both of which are implemented in our code. For real-world long-context evaluation, we introduce **LongBench-Chat**, which evaluates instruction-following capability on queries of 10k-100k tokens in length. ## All Models We open-sourced the following list of models: |Model|Huggingface Repo|Description| |---|---|---| |**LongAlign-6B-64k-base**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-6B-64k-base) | **ChatGLM3-6B** with an extended 64k context window | |**LongAlign-6B-64k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-6B-64k) | Chat model by LongAlign training on LongAlign-6B-64k-base| |**LongAlign-7B-64k-base**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-7B-64k-base) | **Llama-2-7B** with an extended 64k context window | |**LongAlign-7B-64k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-7B-64k) | Chat model by LongAlign training on LongAlign-7B-64k-base| |**LongAlign-13B-64k-base**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-13B-64k-base) | **Llama-2-13B** with an extended 64k context window | |**LongAlign-13B-64k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-13B-64k) | Chat model by LongAlign training on LongAlign-13B-64k-base| |**ChatGLM3-6B-128k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/chatglm3-6b-128k) | **ChatGLM3-6B** with a 128k context window| ![](assets/leaderboard.png) ## Model usage Chat prompt template for LongAlign-6B-64k: ```text [Round 1] 问：Hi! 答：Hello! What can I assist you today? [Round 2] 问：What should I do if I can't sleep at night? 答： ``` Chat prompt template for LongAlign-7B-64k and LongAlign-13B-64k: ```text [INST]Hi![/INST]Hello! What can I assist you today? [INST]What should I do if I can't sleep at night?[/INST] ``` ChatGLM3-6B-128k uses the same prompt template as [ChatGLM3-6B](https://huggingface.co/THUDM/chatglm3-6b). A simple demo for deployment of the model: ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("THUDM/LongAlign-6B-64k", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("THUDM/LongAlign-6B-64k", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto") model = model.eval() query = open("assets/paper.txt").read() + "\n\nPlease summarize the paper." response, history = model.chat(tokenizer, query, history=[], max_new_tokens=512, temperature=1) print(response) ``` ## Citation If you find our work useful, please consider citing LongAlign: ``` ```
THUDM/LongAlign-7B-64k
THUDM
2024-02-01T07:30:44Z
191
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "Long Context", "en", "zh", "dataset:THUDM/LongAlign-10k", "arxiv:2401.18058", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-29T09:06:16Z
--- language: - en - zh library_name: transformers tags: - Long Context - llama datasets: - THUDM/LongAlign-10k license: apache-2.0 --- # LongAlign-7B-64k <p align="center"> 🤗 <a href="https://huggingface.co/datasets/THUDM/LongAlign-10k" target="_blank">[LongAlign Dataset] </a> • 💻 <a href="https://github.com/THUDM/LongAlign" target="_blank">[Github Repo]</a> • 📃 <a href="https://arxiv.org/abs/2401.18058" target="_blank">[LongAlign Paper]</a> </p> **LongAlign** is the first full recipe for LLM alignment on long context. We propose the **LongAlign-10k** dataset, containing 10,000 long instruction examples of 8k-64k tokens in length. We investigate training strategies, namely **packing (with loss weighting) and sorted batching**, both of which are implemented in our code. For real-world long-context evaluation, we introduce **LongBench-Chat**, which evaluates instruction-following capability on queries of 10k-100k tokens in length. ## All Models We open-sourced the following list of models: |Model|Huggingface Repo|Description| |---|---|---| |**LongAlign-6B-64k-base**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-6B-64k-base) | **ChatGLM3-6B** with an extended 64k context window | |**LongAlign-6B-64k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-6B-64k) | Chat model by LongAlign training on LongAlign-6B-64k-base| |**LongAlign-7B-64k-base**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-7B-64k-base) | **Llama-2-7B** with an extended 64k context window | |**LongAlign-7B-64k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-7B-64k) | Chat model by LongAlign training on LongAlign-7B-64k-base| |**LongAlign-13B-64k-base**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-13B-64k-base) | **Llama-2-13B** with an extended 64k context window | |**LongAlign-13B-64k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-13B-64k) | Chat model by LongAlign training on LongAlign-13B-64k-base| |**ChatGLM3-6B-128k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/chatglm3-6b-128k) | **ChatGLM3-6B** with a 128k context window| ![](assets/leaderboard.png) ## Model usage Chat prompt template for LongAlign-6B-64k: ```text [Round 1] 问：Hi! 答：Hello! What can I assist you today? [Round 2] 问：What should I do if I can't sleep at night? 答： ``` Chat prompt template for LongAlign-7B-64k and LongAlign-13B-64k: ```text [INST]Hi![/INST]Hello! What can I assist you today? [INST]What should I do if I can't sleep at night?[/INST] ``` ChatGLM3-6B-128k uses the same prompt template as [ChatGLM3-6B](https://huggingface.co/THUDM/chatglm3-6b). A simple demo for deployment of the model: ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("THUDM/LongAlign-6B-64k", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("THUDM/LongAlign-6B-64k", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto") model = model.eval() query = open("assets/paper.txt").read() + "\n\nPlease summarize the paper." response, history = model.chat(tokenizer, query, history=[], max_new_tokens=512, temperature=1) print(response) ``` ## Citation If you find our work useful, please consider citing LongAlign: ``` ```
Sosnitskij/fialka-13B-v4-gguf
Sosnitskij
2024-02-01T07:29:25Z
16
1
null
[ "gguf", "text-generation", "ru", "en", "dataset:0x7194633/fialka-v3-data", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2024-01-30T23:00:13Z
--- license: apache-2.0 datasets: - 0x7194633/fialka-v3-data language: - ru - en pipeline_tag: text-generation --- Original model: https://huggingface.co/0x7194633/fialka-13B-v4 # Fialka v4.0 13B ![Violet](https://i.imgur.com/tQKuehR.png) ## Description Fialka language models are trained to follow instructions and maintain communication in Russian. The fourth version of Fialka is the third version optimized through RLHF. More responsive and more informative. ## Usage The model has a query format as in zephyr. ``` <|user|> ะงั‚ะพ ั‚ะฐะบะพะต ะผะตะผ?</s> <|assistant|> ะœะตะผ (ะฐะฝะณะป. meme) - ัั‚ะพ ะตะดะธะฝะธั†ะฐ ัะพั†ะธะฐะปัŒะฝะพะณะพ ะฟะพะฒะตะดะตะฝะธั, ะบะพั‚ะพั€ะฐั ะฑั‹ัั‚ั€ะพ ั€ะฐัะฟั€ะพัั‚ั€ะฐะฝัะตั‚ัั ะฒ ะธะฝั‚ะตั€ะฝะตั‚ะต ะธะปะธ ะฒ ัะพั†ะธะฐะปัŒะฝั‹ั… ัะตั‚ัั… ั ั†ะตะปัŒัŽ ะฟะตั€ะตะดะฐั‡ะธ ะธะฝั„ะพั€ะผะฐั†ะธะธ ะธ ะธะดะตะน. ะžะฑั‹ั‡ะฝะพ ะผะตะผั‹ ัะฒะปััŽั‚ัั ัˆัƒั‚ะบะฐะผะธ, ัั‚ะธัˆะบะฐะผะธ, ะธะทะพะฑั€ะฐะถะตะฝะธัะผะธ ะธะปะธ ะฒะธะดะตะพ ะธ ะธะผะตัŽั‚ ัŽะผะพั€ะธัั‚ะธั‡ะตัะบะธะน ะธะปะธ ัะฐั‚ะธั€ะธั‡ะตัะบะธะน ั…ะฐั€ะฐะบั‚ะตั€, ะฝะพ ะผะพะณัƒั‚ ัะพะดะตั€ะถะฐั‚ัŒ ะธ ะฑะพะปะตะต ัะตั€ัŒะตะทะฝั‹ะต ะธะดะตะธ, ั‚ะฐะบะธะต ะบะฐะบ ะฟะพะปะธั‚ะธั‡ะตัะบะธะต ะธะปะธ ัะพั†ะธะฐะปัŒะฝั‹ะต ะฟั€ะพั‚ะตัั‚ั‹, ะธ ะดะฐะถะต ัƒะณั€ะพะทั‹. ะœะตะผั‹ ะผะพะณัƒั‚ ัะปัƒะถะธั‚ัŒ ะดะปั ัะพะทะดะฐะฝะธั ะธ ั€ะฐัะฟั€ะพัั‚ั€ะฐะฝะตะฝะธั ะบะพะฝั‚ะตะฝั‚ะฐ ะธ ะธะฝั„ะพั€ะผะฐั†ะธะธ, ะฐ ั‚ะฐะบะถะต ะดะปั ะฒั‹ั€ะฐะถะตะฝะธั ะผะฝะตะฝะธั ะธะปะธ ั‡ัƒะฒัั‚ะฒ ะฐะฒั‚ะพั€ะฐ. ```
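A minimal local-inference sketch with `llama-cpp-python`, assuming a hypothetical quant filename (the real .gguf name should be taken from the repo's file list) and reusing the Zephyr-style prompt format shown above.

```python
# Hedged sketch: run a GGUF quant of Fialka v4 locally with llama-cpp-python.
# The exact .gguf filename below is an assumption (check the repo's files).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="Sosnitskij/fialka-13B-v4-gguf",
    filename="fialka-13B-v4.Q4_K_M.gguf",  # hypothetical quant name
)
llm = Llama(model_path=gguf_path, n_ctx=4096)

# Zephyr-style prompt format, as shown in the card above.
prompt = "<|user|>\nЧто такое мем?</s>\n<|assistant|>\n"
output = llm(prompt, max_tokens=256, stop=["</s>"])
print(output["choices"][0]["text"])
```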
LoneStriker/DistilabelBeagle14-7B-5.0bpw-h6-exl2
LoneStriker
2024-02-01T07:28:19Z
16
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "dpo", "rlhf", "rlaif", "distilabel", "conversational", "arxiv:1910.09700", "base_model:mlabonne/Beagle14-7B", "base_model:finetune:mlabonne/Beagle14-7B", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-01T07:26:19Z
--- license: cc-by-nc-4.0 base_model: mlabonne/Beagle14-7B tags: - merge - mergekit - lazymergekit - dpo - rlhf - rlaif - distilabel --- # Model Card for Model ID This is a preference tuned version of `mlabonne/Beagle14-7B` using a mix of Argilla's orca pairs and a new upcoming multi-turn dpo dataset. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Argilla - **Model type:** [More Information Needed] - **Language(s) (NLP):** English - **License:** cc-by-nc-4.0 - **Finetuned from model [optional]:** mlabonne/Beagle14-7B ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bachbouch/w-3-qlora-full
bachbouch
2024-02-01T07:27:31Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-01T06:58:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
medinamanuel/q-FrozenLake-v1-4x4-noSlippery
medinamanuel
2024-02-01T07:17:06Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-02-01T07:17:01Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="medinamanuel/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
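The snippet above calls a `load_from_hub` helper that the card does not define. A minimal sketch of such a helper, assuming the common pattern of downloading a pickled Q-table dictionary with `huggingface_hub`:

```python
# Hedged sketch of a load_from_hub helper compatible with the snippet above.
# This mirrors the usual Q-learning course pattern (download a pickle, return a dict);
# it is an assumption, not code shipped with this repository.
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model dict from the Hugging Face Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="medinamanuel/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)
```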
Grekkla/HubbleSpacePhotograph
Grekkla
2024-02-01T07:13:17Z
1
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:unknown", "region:us" ]
text-to-image
2024-02-01T07:11:53Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: Ph parameters: negative_prompt: Ph output: url: images/Adsฤฑz.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: Hubble Space Photograph license: unknown --- # Hubble Space Photograph <Gallery /> ## Trigger words You should use `Hubble Space Photograph` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Grekkla/HubbleSpacePhotograph/tree/main) them in the Files & versions tab.
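A minimal sketch for applying this LoRA on top of the SDXL base with `diffusers`, assuming the LoRA weight file is auto-detected (pass `weight_name=...` otherwise); the scheduler defaults, step count, and prompt are illustrative apart from the trigger words.

```python
# Hedged sketch: apply the LoRA on top of SDXL base with diffusers.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
# Pass weight_name="<file>.safetensors" if the LoRA file is not auto-detected.
pipe.load_lora_weights("Grekkla/HubbleSpacePhotograph")

image = pipe(
    prompt="Hubble Space Photograph of a spiral galaxy",  # includes the trigger words from the card
    num_inference_steps=30,
).images[0]
image.save("hubble_lora_sample.png")
```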
LanguageBind/MoE-LLaVA-Phi2-2.7B-4e
LanguageBind
2024-02-01T07:10:04Z
389
38
transformers
[ "transformers", "safetensors", "moe_llava_phi", "text-generation", "custom_code", "arxiv:2401.15947", "arxiv:2311.10122", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-23T06:50:28Z
--- license: apache-2.0 --- <p align="center"> <img src="https://s11.ax1x.com/2023/12/28/piqvDMV.png" width="250" style="margin-bottom: 0.2;"/> <p> <h2 align="center"> <a href="https://arxiv.org/abs/2401.15947">MoE-LLaVA: Mixture of Experts for Large Vision-Language Models</a></h2> <h5 align="center"> If you like our project, please give us a star โญ on GitHub for latest update. </h2> <h5 align="center"> </h5> ## ๐Ÿ“ฐ News * **[2024.01.30]** The [paper](https://arxiv.org/abs/2401.15947) is released. * **[2024.01.27]** ๐Ÿค—[Hugging Face demo](https://huggingface.co/spaces/LanguageBind/MoE-LLaVA) and **all codes & datasets** are available now! Welcome to **watch** ๐Ÿ‘€ this repository for the latest updates. ## ๐Ÿ˜ฎ Highlights MoE-LLaVA shows excellent performance in multi-modal learning. ### ๐Ÿ”ฅ High performance, but with fewer parameters - with just **3B sparsely activated parameters**, MoE-LLaVA demonstrates performance comparable to the LLaVA-1.5-7B on various visual understanding datasets and even surpasses the LLaVA-1.5-13B in object hallucination benchmarks. ### ๐Ÿš€ Simple baseline, learning multi-modal interactions with sparse pathways. - With the addition of **a simple MoE tuning stage**, we can complete the training of MoE-LLaVA on **8 V100 GPUs** within 2 days. ## ๐Ÿค— Demo ### Gradio Web UI Highly recommend trying out our web demo by the following command, which incorporates all features currently supported by MoE-LLaVA. We also provide [online demo](https://huggingface.co/spaces/LanguageBind/MoE-LLaVA) in Huggingface Spaces. ```bash # use phi2 deepspeed --include localhost:0 moellava/serve/gradio_web_server.py --model-path "LanguageBind/MoE-LLaVA-Phi2-2.7B-4e" # use qwen deepspeed --include localhost:0 moellava/serve/gradio_web_server.py --model-path "LanguageBind/MoE-LLaVA-Qwen-1.8B-4e" # use stablelm deepspeed --include localhost:0 moellava/serve/gradio_web_server.py --model-path "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e" ``` ### CLI Inference ```bash # use phi2 deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-Phi2-2.7B-4e" --image-file "image.jpg" # use qwen deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-Qwen-1.8B-4e" --image-file "image.jpg" # use stablelm deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e" --image-file "image.jpg" ``` ## ๐Ÿณ Model Zoo | Model | LLM | Checkpoint | Avg | VQAv2 | GQA | VizWiz | SQA | T-VQA | POPE | MM-Bench| LLaVA-Bench-Wild | MM-Vet | |----------|-----------|-----------|---|---|---|---|---|---|---|---|---|---| | MoE-LLaVA-1.6Bร—4-Top2 | 1.6B | [LanguageBind/MoE-LLaVA-StableLM-1.6B-4e](https://huggingface.co/LanguageBind/MoE-LLaVA-StableLM-1.6B-4e) | 60.0 | 76.0 | 60.4 | 37.2 | 62.6 | 47.8 | 84.3 | 59.4 | 85.9 | 26.1 | | MoE-LLaVA-1.8Bร—4-Top2 | 1.8B | [LanguageBind/MoE-LLaVA-Qwen-1.8B-4e](https://huggingface.co/LanguageBind/MoE-LLaVA-Qwen-1.8B-4e) | 60.2 | 76.2 | 61.5 | 32.6 | 63.1 | 48.0 | 87.0 | 59.6 | 88.7 | 25.3 | | MoE-LLaVA-2.7Bร—4-Top2 | 2.7B | [LanguageBind/MoE-LLaVA-Phi2-2.7B-4e](https://huggingface.co/LanguageBind/MoE-LLaVA-Phi2-2.7B-4e) | 63.9 | 77.1 | 61.1 | 43.4 | 68.7 | 50.2 | 85.0 | 65.5 | 93.2 | 31.1 | <!-- | LLaVA-1.5 | 7B | [liuhaotian/llava-v1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b) | 62.0 | 78.5 | 62.0 | 50.0 | 66.8 | 58.2 | 85.9 | 64.3 | 31.1 | | LLaVA-1.5 | 13B | [liuhaotian/llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b) | 64.9 | 80.0 | 63.3 | 
53.6 | 71.6 | 61.3 | 85.9 | 67.7 | 36.1 | --> ## โš™๏ธ Requirements and Installation * Python >= 3.10 * Pytorch == 2.0.1 * CUDA Version >= 11.7 * **Transformers == 4.36.2** * **Tokenizers==0.15.1** * Install required packages: ```bash git clone https://github.com/PKU-YuanGroup/MoE-LLaVA cd MoE-LLaVA conda create -n moellava python=3.10 -y conda activate moellava pip install --upgrade pip # enable PEP 660 support pip install -e . pip install -e ".[train]" pip install flash-attn --no-build-isolation # Below are optional. For Qwen model. git clone https://github.com/Dao-AILab/flash-attention cd flash-attention && pip install . # Below are optional. Installing them might be slow. # pip install csrc/layer_norm # If the version of flash-attn is higher than 2.1.1, the following is not needed. # pip install csrc/rotary ``` ## ๐Ÿ—๏ธ Training & Validating The training & validating instruction is in [TRAIN.md](docs/TRAIN.md) & [EVAL.md](docs/EVAL.md). ## ๐Ÿ’ก Customizing your MoE-LLaVA The instruction is in [CUSTOM.md](docs/CUSTOM.md). ## ๐Ÿ˜ Visualization The instruction is in [VISUALIZATION.md](docs/VISUALIZATION.md). ## ๐Ÿค– API **We open source all codes.** If you want to load the model (e.g. ```LanguageBind/MoE-LLaVA```) on local, you can use the following code snippets. **Using the following command to run the code.** ```bash deepspeed predict.py ``` ```python import torch from moellava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN from moellava.conversation import conv_templates, SeparatorStyle from moellava.model.builder import load_pretrained_model from moellava.utils import disable_torch_init from moellava.mm_utils import tokenizer_image_token, get_model_name_from_path, KeywordsStoppingCriteria def main(): disable_torch_init() image = 'moellava/serve/examples/extreme_ironing.jpg' inp = 'What is unusual about this image?' model_path = 'LanguageBind/MoE-LLaVA-Phi2-2.7B-4e' # LanguageBind/MoE-LLaVA-Qwen-1.8B-4e or LanguageBind/MoE-LLaVA-StableLM-1.6B-4e device = 'cuda' load_4bit, load_8bit = False, False # FIXME: Deepspeed support 4bit or 8bit? model_name = get_model_name_from_path(model_path) tokenizer, model, processor, context_len = load_pretrained_model(model_path, None, model_name, load_8bit, load_4bit, device=device) image_processor = processor['image'] conv_mode = "phi" # qwen or stablelm conv = conv_templates[conv_mode].copy() roles = conv.roles image_tensor = image_processor.preprocess(image, return_tensors='pt')['pixel_values'].to(model.device, dtype=torch.float16) print(f"{roles[1]}: {inp}") inp = DEFAULT_IMAGE_TOKEN + '\n' + inp conv.append_message(conv.roles[0], inp) conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda() stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2 keywords = [stop_str] stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids) with torch.inference_mode(): output_ids = model.generate( input_ids, images=image_tensor, do_sample=True, temperature=0.2, max_new_tokens=1024, use_cache=True, stopping_criteria=[stopping_criteria]) outputs = tokenizer.decode(output_ids[0, input_ids.shape[1]:], skip_special_tokens=True).strip() print(outputs) if __name__ == '__main__': main() ``` ## ๐Ÿ™Œ Related Projects * [Video-LLaVA](https://github.com/PKU-YuanGroup/Video-LLaVA) This framework empowers the model to efficiently utilize the united visual tokens. 
* [LanguageBind](https://github.com/PKU-YuanGroup/LanguageBind) An open-source, language-based retrieval framework covering five modalities.

## 👍 Acknowledgement
* [LLaVA](https://github.com/haotian-liu/LLaVA) The codebase we built upon; an efficient large language-and-vision assistant.

## 🔒 License
* The majority of this project is released under the Apache 2.0 license as found in the [LICENSE](https://github.com/PKU-YuanGroup/MoE-LLaVA/blob/main/LICENSE) file.
* The service is a research preview intended for non-commercial use only, subject to the model [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation.

## ✍️ Citation
If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil:.

```BibTeX
@misc{lin2024moellava,
      title={MoE-LLaVA: Mixture of Experts for Large Vision-Language Models},
      author={Bin Lin and Zhenyu Tang and Yang Ye and Jiaxi Cui and Bin Zhu and Peng Jin and Junwu Zhang and Munan Ning and Li Yuan},
      year={2024},
      eprint={2401.15947},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

```BibTeX
@article{lin2023video,
  title={Video-LLaVA: Learning United Visual Representation by Alignment Before Projection},
  author={Lin, Bin and Zhu, Bin and Ye, Yang and Ning, Munan and Jin, Peng and Yuan, Li},
  journal={arXiv preprint arXiv:2311.10122},
  year={2023}
}
```

## ✨ Star History
[![Star History](https://api.star-history.com/svg?repos=PKU-YuanGroup/MoE-LLaVA&type=Date)](https://star-history.com/#PKU-YuanGroup/MoE-LLaVA&Date)

## 🤝 Contributors
<a href="https://github.com/PKU-YuanGroup/MoE-LLaVA/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=PKU-YuanGroup/MoE-LLaVA" />
</a>
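As a follow-up to the API example above, the single-image inference steps can be collected into a small helper for repeated queries against an already loaded model. This is a hypothetical convenience sketch (the function name and structure are ours, not part of the MoE-LLaVA codebase); it reuses only the calls that appear in that example.

```python
import torch
from moellava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from moellava.conversation import conv_templates, SeparatorStyle
from moellava.mm_utils import tokenizer_image_token, KeywordsStoppingCriteria


def ask_about_image(model, tokenizer, image_processor, image_path, question,
                    conv_mode='phi', temperature=0.2, max_new_tokens=1024):
    """Hypothetical helper: one image plus one question -> answer string.

    `model`, `tokenizer`, `image_processor` are the objects returned by
    load_pretrained_model() in the API example above.
    """
    conv = conv_templates[conv_mode].copy()

    # Preprocess the image exactly as in the example above.
    image_tensor = image_processor.preprocess(image_path, return_tensors='pt')['pixel_values'].to(
        model.device, dtype=torch.float16)

    # Build the prompt with the image placeholder token.
    conv.append_message(conv.roles[0], DEFAULT_IMAGE_TOKEN + '\n' + question)
    conv.append_message(conv.roles[1], None)
    prompt = conv.get_prompt()
    input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX,
                                      return_tensors='pt').unsqueeze(0).to(model.device)

    stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
    stopping_criteria = KeywordsStoppingCriteria([stop_str], tokenizer, input_ids)

    with torch.inference_mode():
        output_ids = model.generate(input_ids, images=image_tensor, do_sample=True,
                                    temperature=temperature, max_new_tokens=max_new_tokens,
                                    use_cache=True, stopping_criteria=[stopping_criteria])
    return tokenizer.decode(output_ids[0, input_ids.shape[1]:], skip_special_tokens=True).strip()
```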
modelId: LanguageBind/LanguageBind_Video_V1.5_FT
author: LanguageBind
last_modified: 2024-02-01T06:58:17Z
downloads: 1,905
likes: 4
library_name: transformers
tags: [ "transformers", "pytorch", "LanguageBindVideo", "zero-shot-image-classification", "arxiv:2310.01852", "license:mit", "endpoints_compatible", "region:us" ]
pipeline_tag: zero-shot-image-classification
createdAt: 2023-11-26T13:35:39Z
--- license: mit --- <p align="center"> <img src="https://s11.ax1x.com/2024/02/01/pFMDAm9.png" width="250" style="margin-bottom: 0.2;"/> <p> <h2 align="center"> <a href="https://arxiv.org/pdf/2310.01852.pdf">ใ€ICLR 2024 ๐Ÿ”ฅใ€‘LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment</a></h2> <h5 align="center"> If you like our project, please give us a star โญ on GitHub for latest update. </h2> ## ๐Ÿ“ฐ News * **[2024.01.27]** ๐Ÿ‘€๐Ÿ‘€๐Ÿ‘€ Our [MoE-LLaVA](https://github.com/PKU-YuanGroup/MoE-LLaVA) is released! A sparse model with 3B parameters outperformed the dense model with 7B parameters. * **[2024.01.16]** ๐Ÿ”ฅ๐Ÿ”ฅ๐Ÿ”ฅ Our LanguageBind has been accepted at ICLR 2024! We earn the score of 6(3)8(6)6(6)6(6) [here](https://openreview.net/forum?id=QmZKc7UZCy&noteId=OgsxQxAleA). * **[2023.12.15]** ๐Ÿ’ช๐Ÿ’ช๐Ÿ’ช We expand the ๐Ÿ’ฅ๐Ÿ’ฅ๐Ÿ’ฅ VIDAL dataset and now have **10M video-text data**. We launch **LanguageBind_Video 1.5**, checking our [model zoo](#-model-zoo). * **[2023.12.10]** We expand the ๐Ÿ’ฅ๐Ÿ’ฅ๐Ÿ’ฅ VIDAL dataset and now have **10M depth and 10M thermal data**. We are in the process of uploading thermal and depth data on [Hugging Face](https://huggingface.co/datasets/LanguageBind/VIDAL-Depth-Thermal) and expect the whole process to last 1-2 months. * **[2023.11.27]** ๐Ÿ”ฅ๐Ÿ”ฅ๐Ÿ”ฅ We have updated our [paper](https://arxiv.org/abs/2310.01852) with emergency zero-shot results., checking our โœจ [results](#emergency-results). * **[2023.11.26]** ๐Ÿ’ฅ๐Ÿ’ฅ๐Ÿ’ฅ We have open-sourced all textual sources and corresponding YouTube IDs [here](DATASETS.md). * **[2023.11.26]** ๐Ÿ“ฃ๐Ÿ“ฃ๐Ÿ“ฃ We have open-sourced fully fine-tuned **Video & Audio**, achieving improved performance once again, checking our [model zoo](#-model-zoo). * **[2023.11.22]** We are about to release a fully fine-tuned version, and the **HUGE** version is currently undergoing training. * **[2023.11.21]** ๐Ÿ’ฅ We are releasing sample data in [DATASETS.md](DATASETS.md) so that individuals who are interested can further modify the code to train it on their own data. * **[2023.11.20]** ๐Ÿš€๐Ÿš€๐Ÿš€ [Video-LLaVA](https://github.com/PKU-YuanGroup/Video-LLaVA) builds a large visual-language model to achieve ๐ŸŽ‰SOTA performances based on LanguageBind encoders. * **[2023.10.23]** ๐ŸŽถ LanguageBind-Audio achieves ๐ŸŽ‰๐ŸŽ‰๐ŸŽ‰**state-of-the-art (SOTA) performance on 5 datasets**, checking our โœจ [results](#multiple-modalities)! * **[2023.10.14]** ๐Ÿ˜ฑ Released a stronger LanguageBind-Video, checking our โœจ [results](#video-language)! The video checkpoint **have updated** on Huggingface Model Hub! * **[2023.10.10]** We provide sample data, which can be found in [assets](assets), and [emergency zero-shot usage](#emergency-zero-shot) is described. * **[2023.10.07]** The checkpoints are available on ๐Ÿค— [Huggingface Model](https://huggingface.co/LanguageBind). * **[2023.10.04]** Code and [demo](https://huggingface.co/spaces/LanguageBind/LanguageBind) are available now! Welcome to **watch** ๐Ÿ‘€ this repository for the latest updates. ## ๐Ÿ˜ฎ Highlights ### ๐Ÿ’ก High performance, but NO intermediate modality required LanguageBind is a **language-centric** multimodal pretraining approach, **taking the language as the bind across different modalities** because the language modality is well-explored and contains rich semantics. * The following first figure shows the architecture of LanguageBind. 
LanguageBind can easily be extended to segmentation and detection tasks, and potentially to an unlimited number of modalities.

### ⚡️ A multimodal, fully aligned and voluminous dataset
We propose **VIDAL-10M**, a dataset of **10 million** samples pairing **V**ideo, **I**nfrared, **D**epth, and **A**udio with their corresponding **L**anguage, which greatly expands the data beyond visual modalities.
* The second figure shows our proposed VIDAL-10M dataset, which includes five modalities: video, infrared, depth, audio, and language.

### 🔥 Multi-view enhanced description for training
We make multi-view enhancements to the language descriptions. We produce multi-view descriptions that combine **meta-data**, **spatial**, and **temporal** information to greatly enrich the semantics of the language. In addition, we further **enhance the language with ChatGPT** to create a good semantic space aligned with each modality.

## 🤗 Demo
* **Local demo.** We highly recommend trying out our web demo, which incorporates all features currently supported by LanguageBind.
```bash
python gradio_app.py
```
* **Online demo.** We provide the [online demo](https://huggingface.co/spaces/LanguageBind/LanguageBind) in Huggingface Spaces. In this demo, you can calculate the similarity of modalities to language, such as audio-to-language, video-to-language, and depth-to-image.

## 🛠️ Requirements and Installation
* Python >= 3.8
* PyTorch >= 1.13.1
* CUDA Version >= 11.6
* Install required packages:
```bash
git clone https://github.com/PKU-YuanGroup/LanguageBind
cd LanguageBind
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116
pip install -r requirements.txt
```

## 🐳 Model Zoo
The names in the table represent different encoder models. For example, `LanguageBind/LanguageBind_Video_FT` is the fully fine-tuned version, while `LanguageBind/LanguageBind_Video` is the LoRA-tuned version. You can freely swap them in the recommended [API usage](#-api). We recommend the fully fine-tuned versions, as they offer stronger performance.
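Concretely, switching between the two variants only changes the checkpoint name passed to the loader. Below is a minimal sketch that reuses the `LanguageBind` constructor from the API section later in this card; the toggle flag is our own illustrative addition, not part of the official API.

```python
from languagebind import LanguageBind

# Naming rule from the tables below: "<Modality>_FT" is the fully fine-tuned
# encoder, the same name without "_FT" is the LoRA-tuned one.
use_fully_finetuned = True  # set False to load the LoRA-tuned Video/Audio encoders

clip_type = {
    'video': 'LanguageBind_Video_FT' if use_fully_finetuned else 'LanguageBind_Video',
    'audio': 'LanguageBind_Audio_FT' if use_fully_finetuned else 'LanguageBind_Audio',
    'thermal': 'LanguageBind_Thermal',  # Depth and Thermal currently ship only LoRA-tuned weights
    'image': 'LanguageBind_Image',
    'depth': 'LanguageBind_Depth',
}

model = LanguageBind(clip_type=clip_type, cache_dir='./cache_dir')
model.eval()
```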
<div align="center"> <table border="1" width="100%"> <tr align="center"> <th>Modality</th><th>LoRA tuning</th><th>Fine-tuning</th> </tr> <tr align="center"> <td>Video</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video">LanguageBind_Video</a></td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video_FT">LanguageBind_Video_FT</a></td> </tr> <tr align="center"> <td>Audio</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Audio">LanguageBind_Audio</a></td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Audio_FT">LanguageBind_Audio_FT</a></td> </tr> <tr align="center"> <td>Depth</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Depth">LanguageBind_Depth</a></td><td>-</td> </tr> <tr align="center"> <td>Thermal</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Thermal">LanguageBind_Thermal</a></td><td>-</td> </tr> </table> </div> <div align="center"> <table border="1" width="100%"> <tr align="center"> <th>Version</th><th>Tuning</th><th>Model size</th><th>Num_frames</th><th>HF Link</th><th>MSR-VTT</th><th>DiDeMo</th><th>ActivityNet</th><th>MSVD</th> </tr> <tr align="center"> <td>LanguageBind_Video</td><td>LoRA</td><td>Large</td><td>8</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video">Link</a></td><td>42.6</td><td>37.8</td><td>35.1</td><td>52.2</td> </tr> <tr align="center"> <td>LanguageBind_Video_FT</td><td>Full-tuning</td><td>Large</td><td>8</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video_FT">Link</a></td><td>42.7</td><td>38.1</td><td>36.9</td><td>53.5</td> </tr> <tr align="center"> <td>LanguageBind_Video_V1.5_FT</td><td>Full-tuning</td><td>Large</td><td>8</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video_V1.5_FT">Link</a></td><td>42.8</td><td>39.7</td><td>38.4</td><td>54.1</td> </tr> <tr align="center"> <td>LanguageBind_Video_V1.5_FT</td><td>Full-tuning</td><td>Large</td><td>12</td><td>Coming soon</td> </tr> <tr align="center"> <td>LanguageBind_Video_Huge_V1.5_FT</td><td>Full-tuning</td><td>Huge</td><td>8</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video_Huge_V1.5_FT">Link</a></td><td>44.8</td><td>39.9</td><td>41.0</td><td>53.7</td> </tr> <tr align="center"> <td>LanguageBind_Video_Huge_V1.5_FT</td><td>Full-tuning</td><td>Huge</td><td>12</td><td>Coming soon</td> </tr> </table> </div> ## ๐Ÿค– API **We open source all modalities preprocessing code.** If you want to load the model (e.g. ```LanguageBind/LanguageBind_Thermal```) from the model hub on Huggingface or on local, you can use the following code snippets! ### Inference for Multi-modal Binding We have provided some sample datasets in [assets](assets) to quickly see how languagebind works. 
```python import torch from languagebind import LanguageBind, to_device, transform_dict, LanguageBindImageTokenizer if __name__ == '__main__': device = 'cuda:0' device = torch.device(device) clip_type = { 'video': 'LanguageBind_Video_FT', # also LanguageBind_Video 'audio': 'LanguageBind_Audio_FT', # also LanguageBind_Audio 'thermal': 'LanguageBind_Thermal', 'image': 'LanguageBind_Image', 'depth': 'LanguageBind_Depth', } model = LanguageBind(clip_type=clip_type, cache_dir='./cache_dir') model = model.to(device) model.eval() pretrained_ckpt = f'lb203/LanguageBind_Image' tokenizer = LanguageBindImageTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir/tokenizer_cache_dir') modality_transform = {c: transform_dict[c](model.modality_config[c]) for c in clip_type.keys()} image = ['assets/image/0.jpg', 'assets/image/1.jpg'] audio = ['assets/audio/0.wav', 'assets/audio/1.wav'] video = ['assets/video/0.mp4', 'assets/video/1.mp4'] depth = ['assets/depth/0.png', 'assets/depth/1.png'] thermal = ['assets/thermal/0.jpg', 'assets/thermal/1.jpg'] language = ["Training a parakeet to climb up a ladder.", 'A lion climbing a tree to catch a monkey.'] inputs = { 'image': to_device(modality_transform['image'](image), device), 'video': to_device(modality_transform['video'](video), device), 'audio': to_device(modality_transform['audio'](audio), device), 'depth': to_device(modality_transform['depth'](depth), device), 'thermal': to_device(modality_transform['thermal'](thermal), device), } inputs['language'] = to_device(tokenizer(language, max_length=77, padding='max_length', truncation=True, return_tensors='pt'), device) with torch.no_grad(): embeddings = model(inputs) print("Video x Text: \n", torch.softmax(embeddings['video'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) print("Image x Text: \n", torch.softmax(embeddings['image'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) print("Depth x Text: \n", torch.softmax(embeddings['depth'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) print("Audio x Text: \n", torch.softmax(embeddings['audio'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) print("Thermal x Text: \n", torch.softmax(embeddings['thermal'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) ``` Then returns the following result. ```bash Video x Text: [[9.9989331e-01 1.0667283e-04] [1.3255903e-03 9.9867439e-01]] Image x Text: [[9.9990666e-01 9.3292067e-05] [4.6132666e-08 1.0000000e+00]] Depth x Text: [[0.9954276 0.00457235] [0.12042473 0.8795753 ]] Audio x Text: [[0.97634876 0.02365119] [0.02917843 0.97082156]] Thermal x Text: [[0.9482511 0.0517489 ] [0.48746133 0.5125386 ]] ``` ### Emergency zero-shot Since languagebind binds each modality together, we also found the **emergency zero-shot**. It's very simple to use. ```python print("Video x Audio: \n", torch.softmax(embeddings['video'] @ embeddings['audio'].T, dim=-1).detach().cpu().numpy()) print("Image x Depth: \n", torch.softmax(embeddings['image'] @ embeddings['depth'].T, dim=-1).detach().cpu().numpy()) print("Image x Thermal: \n", torch.softmax(embeddings['image'] @ embeddings['thermal'].T, dim=-1).detach().cpu().numpy()) ``` Then, you will get: ``` Video x Audio: [[1.0000000e+00 0.0000000e+00] [3.1150486e-32 1.0000000e+00]] Image x Depth: [[1. 0.] [0. 1.]] Image x Thermal: [[1. 0.] [0. 1.]] ``` ### Different branches for X-Language task Additionally, LanguageBind can be **disassembled into different branches** to handle different tasks. 
Note that we do not train the image encoder; it is simply initialized from OpenCLIP.
#### Thermal
```python
import torch
from languagebind import LanguageBindThermal, LanguageBindThermalTokenizer, LanguageBindThermalProcessor

pretrained_ckpt = 'LanguageBind/LanguageBind_Thermal'
model = LanguageBindThermal.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir')
tokenizer = LanguageBindThermalTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir')
thermal_process = LanguageBindThermalProcessor(model.config, tokenizer)

model.eval()
data = thermal_process([r"your/thermal.jpg"], ['your text'], return_tensors='pt')
with torch.no_grad():
    out = model(**data)

print(out.text_embeds @ out.image_embeds.T)
```

#### Depth
```python
import torch
from languagebind import LanguageBindDepth, LanguageBindDepthTokenizer, LanguageBindDepthProcessor

pretrained_ckpt = 'LanguageBind/LanguageBind_Depth'
model = LanguageBindDepth.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir')
tokenizer = LanguageBindDepthTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir')
depth_process = LanguageBindDepthProcessor(model.config, tokenizer)

model.eval()
data = depth_process([r"your/depth.png"], ['your text.'], return_tensors='pt')
with torch.no_grad():
    out = model(**data)

print(out.text_embeds @ out.image_embeds.T)
```

#### Video
```python
import torch
from languagebind import LanguageBindVideo, LanguageBindVideoTokenizer, LanguageBindVideoProcessor

pretrained_ckpt = 'LanguageBind/LanguageBind_Video_FT'  # also 'LanguageBind/LanguageBind_Video'
model = LanguageBindVideo.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir')
tokenizer = LanguageBindVideoTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir')
video_process = LanguageBindVideoProcessor(model.config, tokenizer)

model.eval()
data = video_process(["your/video.mp4"], ['your text.'], return_tensors='pt')
with torch.no_grad():
    out = model(**data)

print(out.text_embeds @ out.image_embeds.T)
```

#### Audio
```python
import torch
from languagebind import LanguageBindAudio, LanguageBindAudioTokenizer, LanguageBindAudioProcessor

pretrained_ckpt = 'LanguageBind/LanguageBind_Audio_FT'  # also 'LanguageBind/LanguageBind_Audio'
model = LanguageBindAudio.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir')
tokenizer = LanguageBindAudioTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir')
audio_process = LanguageBindAudioProcessor(model.config, tokenizer)

model.eval()
data = audio_process([r"your/audio.wav"], ['your audio.'], return_tensors='pt')
with torch.no_grad():
    out = model(**data)

print(out.text_embeds @ out.image_embeds.T)
```

#### Image
Note that our image encoder is identical to OpenCLIP and is **not** fine-tuned like the other modalities.
```python
import torch
from languagebind import LanguageBindImage, LanguageBindImageTokenizer, LanguageBindImageProcessor

pretrained_ckpt = 'LanguageBind/LanguageBind_Image'
model = LanguageBindImage.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir')
tokenizer = LanguageBindImageTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir')
image_process = LanguageBindImageProcessor(model.config, tokenizer)

model.eval()
data = image_process([r"your/image.jpg"], ['your text.'], return_tensors='pt')
with torch.no_grad():
    out = model(**data)

print(out.text_embeds @ out.image_embeds.T)
```

## 💥 VIDAL-10M
The dataset is described in [DATASETS.md](DATASETS.md).

## 🗝️ Training & Validating
The training and validation instructions are in [TRAIN_AND_VALIDATE.md](TRAIN_AND_VALIDATE.md).
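Building on the video branch example above, the same objects can score several candidate videos against a single caption, which is essentially the text-to-video retrieval setting reported in the model zoo tables. The sketch below is only illustrative: the file paths and caption are placeholders, it assumes the processor accepts a list of several videos together with one caption, and ranking by a softmax over the raw similarities is our own choice rather than the evaluation protocol behind the reported numbers.

```python
import torch
from languagebind import LanguageBindVideo, LanguageBindVideoTokenizer, LanguageBindVideoProcessor

pretrained_ckpt = 'LanguageBind/LanguageBind_Video_FT'
model = LanguageBindVideo.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir')
tokenizer = LanguageBindVideoTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir')
video_process = LanguageBindVideoProcessor(model.config, tokenizer)
model.eval()

# Placeholder inputs: several candidate videos and one query caption.
candidate_videos = ['clips/cooking.mp4', 'clips/football.mp4', 'clips/concert.mp4']
query = ['A person frying vegetables in a pan.']

data = video_process(candidate_videos, query, return_tensors='pt')
with torch.no_grad():
    out = model(**data)

# out.text_embeds: (1, D); out.image_embeds: (num_videos, D)
scores = torch.softmax(out.text_embeds @ out.image_embeds.T, dim=-1)  # (1, num_videos)
best = scores.argmax(dim=-1).item()
print(scores)
print('Best match:', candidate_videos[best])
```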
## ๐Ÿ‘ Acknowledgement * [OpenCLIP](https://github.com/mlfoundations/open_clip) An open source pretraining framework. * [CLIP4Clip](https://github.com/ArrowLuo/CLIP4Clip) An open source Video-Text retrieval framework. * [sRGB-TIR](https://github.com/rpmsnu/sRGB-TIR) An open source framework to generate infrared (thermal) images. * [GLPN](https://github.com/vinvino02/GLPDepth) An open source framework to generate depth images. ## ๐Ÿ”’ License * The majority of this project is released under the MIT license as found in the [LICENSE](https://github.com/PKU-YuanGroup/LanguageBind/blob/main/LICENSE) file. * The dataset of this project is released under the CC-BY-NC 4.0 license as found in the [DATASET_LICENSE](https://github.com/PKU-YuanGroup/LanguageBind/blob/main/DATASET_LICENSE) file. ## โœ๏ธ Citation If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil:. ```BibTeX @misc{zhu2023languagebind, title={LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment}, author={Bin Zhu and Bin Lin and Munan Ning and Yang Yan and Jiaxi Cui and Wang HongFa and Yatian Pang and Wenhao Jiang and Junwu Zhang and Zongwei Li and Cai Wan Zhang and Zhifeng Li and Wei Liu and Li Yuan}, year={2023}, eprint={2310.01852}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ## โœจ Star History [![Star History](https://api.star-history.com/svg?repos=PKU-YuanGroup/LanguageBind&type=Date)](https://star-history.com/#PKU-YuanGroup/LanguageBind&Date) ## ๐Ÿค Contributors <a href="https://github.com/PKU-YuanGroup/LanguageBind/graphs/contributors"> <img src="https://contrib.rocks/image?repo=PKU-YuanGroup/LanguageBind" /> </a>
modelId: LanguageBind/LanguageBind_Audio_V1.5
author: LanguageBind
last_modified: 2024-02-01T06:58:03Z
downloads: 0
likes: 1
library_name: null
tags: [ "arxiv:2310.01852", "license:mit", "region:us" ]
pipeline_tag: null
createdAt: 2023-11-26T13:36:02Z
modelId: LanguageBind/LanguageBind_Video
author: LanguageBind
last_modified: 2024-02-01T06:57:36Z
downloads: 364
likes: 2
library_name: transformers
tags: [ "transformers", "pytorch", "LanguageBindVideo", "zero-shot-image-classification", "arxiv:2310.01852", "license:mit", "endpoints_compatible", "region:us" ]
pipeline_tag: zero-shot-image-classification
createdAt: 2023-10-06T09:07:15Z
LanguageBind/LanguageBind_Depth
LanguageBind
2024-02-01T06:57:09Z
244
0
transformers
[ "transformers", "pytorch", "LanguageBindDepth", "zero-shot-image-classification", "arxiv:2310.01852", "license:mit", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
2023-10-06T09:07:38Z
--- license: mit --- <p align="center"> <img src="https://s11.ax1x.com/2024/02/01/pFMDAm9.png" width="250" style="margin-bottom: 0.2;"/> <p> <h2 align="center"> <a href="https://arxiv.org/pdf/2310.01852.pdf">ใ€ICLR 2024 ๐Ÿ”ฅใ€‘LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment</a></h2> <h5 align="center"> If you like our project, please give us a star โญ on GitHub for latest update. </h2> ## ๐Ÿ“ฐ News * **[2024.01.27]** ๐Ÿ‘€๐Ÿ‘€๐Ÿ‘€ Our [MoE-LLaVA](https://github.com/PKU-YuanGroup/MoE-LLaVA) is released! A sparse model with 3B parameters outperformed the dense model with 7B parameters. * **[2024.01.16]** ๐Ÿ”ฅ๐Ÿ”ฅ๐Ÿ”ฅ Our LanguageBind has been accepted at ICLR 2024! We earn the score of 6(3)8(6)6(6)6(6) [here](https://openreview.net/forum?id=QmZKc7UZCy&noteId=OgsxQxAleA). * **[2023.12.15]** ๐Ÿ’ช๐Ÿ’ช๐Ÿ’ช We expand the ๐Ÿ’ฅ๐Ÿ’ฅ๐Ÿ’ฅ VIDAL dataset and now have **10M video-text data**. We launch **LanguageBind_Video 1.5**, checking our [model zoo](#-model-zoo). * **[2023.12.10]** We expand the ๐Ÿ’ฅ๐Ÿ’ฅ๐Ÿ’ฅ VIDAL dataset and now have **10M depth and 10M thermal data**. We are in the process of uploading thermal and depth data on [Hugging Face](https://huggingface.co/datasets/LanguageBind/VIDAL-Depth-Thermal) and expect the whole process to last 1-2 months. * **[2023.11.27]** ๐Ÿ”ฅ๐Ÿ”ฅ๐Ÿ”ฅ We have updated our [paper](https://arxiv.org/abs/2310.01852) with emergency zero-shot results., checking our โœจ [results](#emergency-results). * **[2023.11.26]** ๐Ÿ’ฅ๐Ÿ’ฅ๐Ÿ’ฅ We have open-sourced all textual sources and corresponding YouTube IDs [here](DATASETS.md). * **[2023.11.26]** ๐Ÿ“ฃ๐Ÿ“ฃ๐Ÿ“ฃ We have open-sourced fully fine-tuned **Video & Audio**, achieving improved performance once again, checking our [model zoo](#-model-zoo). * **[2023.11.22]** We are about to release a fully fine-tuned version, and the **HUGE** version is currently undergoing training. * **[2023.11.21]** ๐Ÿ’ฅ We are releasing sample data in [DATASETS.md](DATASETS.md) so that individuals who are interested can further modify the code to train it on their own data. * **[2023.11.20]** ๐Ÿš€๐Ÿš€๐Ÿš€ [Video-LLaVA](https://github.com/PKU-YuanGroup/Video-LLaVA) builds a large visual-language model to achieve ๐ŸŽ‰SOTA performances based on LanguageBind encoders. * **[2023.10.23]** ๐ŸŽถ LanguageBind-Audio achieves ๐ŸŽ‰๐ŸŽ‰๐ŸŽ‰**state-of-the-art (SOTA) performance on 5 datasets**, checking our โœจ [results](#multiple-modalities)! * **[2023.10.14]** ๐Ÿ˜ฑ Released a stronger LanguageBind-Video, checking our โœจ [results](#video-language)! The video checkpoint **have updated** on Huggingface Model Hub! * **[2023.10.10]** We provide sample data, which can be found in [assets](assets), and [emergency zero-shot usage](#emergency-zero-shot) is described. * **[2023.10.07]** The checkpoints are available on ๐Ÿค— [Huggingface Model](https://huggingface.co/LanguageBind). * **[2023.10.04]** Code and [demo](https://huggingface.co/spaces/LanguageBind/LanguageBind) are available now! Welcome to **watch** ๐Ÿ‘€ this repository for the latest updates. ## ๐Ÿ˜ฎ Highlights ### ๐Ÿ’ก High performance, but NO intermediate modality required LanguageBind is a **language-centric** multimodal pretraining approach, **taking the language as the bind across different modalities** because the language modality is well-explored and contains rich semantics. * The following first figure shows the architecture of LanguageBind. 
LanguageBind can be easily extended to segmentation, detection tasks, and potentially to unlimited modalities. ### โšก๏ธ A multimodal, fully aligned and voluminous dataset We propose **VIDAL-10M**, **10 Million data** with **V**ideo, **I**nfrared, **D**epth, **A**udio and their corresponding **L**anguage, which greatly expands the data beyond visual modalities. * The second figure shows our proposed VIDAL-10M dataset, which includes five modalities: video, infrared, depth, audio, and language. ### ๐Ÿ”ฅ Multi-view enhanced description for training We make multi-view enhancements to language. We produce multi-view description that combines **meta-data**, **spatial**, and **temporal** to greatly enhance the semantic information of the language. In addition we further **enhance the language with ChatGPT** to create a good semantic space for each modality aligned language. ## ๐Ÿค— Demo * **Local demo.** Highly recommend trying out our web demo, which incorporates all features currently supported by LanguageBind. ```bash python gradio_app.py ``` * **Online demo.** We provide the [online demo](https://huggingface.co/spaces/LanguageBind/LanguageBind) in Huggingface Spaces. In this demo, you can calculate the similarity of modalities to language, such as audio-to-language, video-to-language, and depth-to-image. ## ๐Ÿ› ๏ธ Requirements and Installation * Python >= 3.8 * Pytorch >= 1.13.1 * CUDA Version >= 11.6 * Install required packages: ```bash git clone https://github.com/PKU-YuanGroup/LanguageBind cd LanguageBind pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116 pip install -r requirements.txt ``` ## ๐Ÿณ Model Zoo The names in the table represent different encoder models. For example, `LanguageBind/LanguageBind_Video_FT` represents the fully fine-tuned version, while `LanguageBind/LanguageBind_Video` represents the LoRA-tuned version. You can freely replace them in the recommended [API usage](#-api). We recommend using the fully fine-tuned version, as it offers stronger performance. 
<div align="center"> <table border="1" width="100%"> <tr align="center"> <th>Modality</th><th>LoRA tuning</th><th>Fine-tuning</th> </tr> <tr align="center"> <td>Video</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video">LanguageBind_Video</a></td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video_FT">LanguageBind_Video_FT</a></td> </tr> <tr align="center"> <td>Audio</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Audio">LanguageBind_Audio</a></td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Audio_FT">LanguageBind_Audio_FT</a></td> </tr> <tr align="center"> <td>Depth</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Depth">LanguageBind_Depth</a></td><td>-</td> </tr> <tr align="center"> <td>Thermal</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Thermal">LanguageBind_Thermal</a></td><td>-</td> </tr> </table> </div> <div align="center"> <table border="1" width="100%"> <tr align="center"> <th>Version</th><th>Tuning</th><th>Model size</th><th>Num_frames</th><th>HF Link</th><th>MSR-VTT</th><th>DiDeMo</th><th>ActivityNet</th><th>MSVD</th> </tr> <tr align="center"> <td>LanguageBind_Video</td><td>LoRA</td><td>Large</td><td>8</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video">Link</a></td><td>42.6</td><td>37.8</td><td>35.1</td><td>52.2</td> </tr> <tr align="center"> <td>LanguageBind_Video_FT</td><td>Full-tuning</td><td>Large</td><td>8</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video_FT">Link</a></td><td>42.7</td><td>38.1</td><td>36.9</td><td>53.5</td> </tr> <tr align="center"> <td>LanguageBind_Video_V1.5_FT</td><td>Full-tuning</td><td>Large</td><td>8</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video_V1.5_FT">Link</a></td><td>42.8</td><td>39.7</td><td>38.4</td><td>54.1</td> </tr> <tr align="center"> <td>LanguageBind_Video_V1.5_FT</td><td>Full-tuning</td><td>Large</td><td>12</td><td>Coming soon</td> </tr> <tr align="center"> <td>LanguageBind_Video_Huge_V1.5_FT</td><td>Full-tuning</td><td>Huge</td><td>8</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video_Huge_V1.5_FT">Link</a></td><td>44.8</td><td>39.9</td><td>41.0</td><td>53.7</td> </tr> <tr align="center"> <td>LanguageBind_Video_Huge_V1.5_FT</td><td>Full-tuning</td><td>Huge</td><td>12</td><td>Coming soon</td> </tr> </table> </div> ## ๐Ÿค– API **We open source all modalities preprocessing code.** If you want to load the model (e.g. ```LanguageBind/LanguageBind_Thermal```) from the model hub on Huggingface or on local, you can use the following code snippets! ### Inference for Multi-modal Binding We have provided some sample datasets in [assets](assets) to quickly see how languagebind works. 
```python import torch from languagebind import LanguageBind, to_device, transform_dict, LanguageBindImageTokenizer if __name__ == '__main__': device = 'cuda:0' device = torch.device(device) clip_type = { 'video': 'LanguageBind_Video_FT', # also LanguageBind_Video 'audio': 'LanguageBind_Audio_FT', # also LanguageBind_Audio 'thermal': 'LanguageBind_Thermal', 'image': 'LanguageBind_Image', 'depth': 'LanguageBind_Depth', } model = LanguageBind(clip_type=clip_type, cache_dir='./cache_dir') model = model.to(device) model.eval() pretrained_ckpt = f'lb203/LanguageBind_Image' tokenizer = LanguageBindImageTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir/tokenizer_cache_dir') modality_transform = {c: transform_dict[c](model.modality_config[c]) for c in clip_type.keys()} image = ['assets/image/0.jpg', 'assets/image/1.jpg'] audio = ['assets/audio/0.wav', 'assets/audio/1.wav'] video = ['assets/video/0.mp4', 'assets/video/1.mp4'] depth = ['assets/depth/0.png', 'assets/depth/1.png'] thermal = ['assets/thermal/0.jpg', 'assets/thermal/1.jpg'] language = ["Training a parakeet to climb up a ladder.", 'A lion climbing a tree to catch a monkey.'] inputs = { 'image': to_device(modality_transform['image'](image), device), 'video': to_device(modality_transform['video'](video), device), 'audio': to_device(modality_transform['audio'](audio), device), 'depth': to_device(modality_transform['depth'](depth), device), 'thermal': to_device(modality_transform['thermal'](thermal), device), } inputs['language'] = to_device(tokenizer(language, max_length=77, padding='max_length', truncation=True, return_tensors='pt'), device) with torch.no_grad(): embeddings = model(inputs) print("Video x Text: \n", torch.softmax(embeddings['video'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) print("Image x Text: \n", torch.softmax(embeddings['image'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) print("Depth x Text: \n", torch.softmax(embeddings['depth'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) print("Audio x Text: \n", torch.softmax(embeddings['audio'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) print("Thermal x Text: \n", torch.softmax(embeddings['thermal'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) ``` Then returns the following result. ```bash Video x Text: [[9.9989331e-01 1.0667283e-04] [1.3255903e-03 9.9867439e-01]] Image x Text: [[9.9990666e-01 9.3292067e-05] [4.6132666e-08 1.0000000e+00]] Depth x Text: [[0.9954276 0.00457235] [0.12042473 0.8795753 ]] Audio x Text: [[0.97634876 0.02365119] [0.02917843 0.97082156]] Thermal x Text: [[0.9482511 0.0517489 ] [0.48746133 0.5125386 ]] ``` ### Emergency zero-shot Since languagebind binds each modality together, we also found the **emergency zero-shot**. It's very simple to use. ```python print("Video x Audio: \n", torch.softmax(embeddings['video'] @ embeddings['audio'].T, dim=-1).detach().cpu().numpy()) print("Image x Depth: \n", torch.softmax(embeddings['image'] @ embeddings['depth'].T, dim=-1).detach().cpu().numpy()) print("Image x Thermal: \n", torch.softmax(embeddings['image'] @ embeddings['thermal'].T, dim=-1).detach().cpu().numpy()) ``` Then, you will get: ``` Video x Audio: [[1.0000000e+00 0.0000000e+00] [3.1150486e-32 1.0000000e+00]] Image x Depth: [[1. 0.] [0. 1.]] Image x Thermal: [[1. 0.] [0. 1.]] ``` ### Different branches for X-Language task Additionally, LanguageBind can be **disassembled into different branches** to handle different tasks. 
Note that we do not train Image, which just initialize from OpenCLIP. #### Thermal ```python import torch from languagebind import LanguageBindThermal, LanguageBindThermalTokenizer, LanguageBindThermalProcessor pretrained_ckpt = 'LanguageBind/LanguageBind_Thermal' model = LanguageBindThermal.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') tokenizer = LanguageBindThermalTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') thermal_process = LanguageBindThermalProcessor(model.config, tokenizer) model.eval() data = thermal_process([r"your/thermal.jpg"], ['your text'], return_tensors='pt') with torch.no_grad(): out = model(**data) print(out.text_embeds @ out.image_embeds.T) ``` #### Depth ```python import torch from languagebind import LanguageBindDepth, LanguageBindDepthTokenizer, LanguageBindDepthProcessor pretrained_ckpt = 'LanguageBind/LanguageBind_Depth' model = LanguageBindDepth.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') tokenizer = LanguageBindDepthTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') depth_process = LanguageBindDepthProcessor(model.config, tokenizer) model.eval() data = depth_process([r"your/depth.png"], ['your text.'], return_tensors='pt') with torch.no_grad(): out = model(**data) print(out.text_embeds @ out.image_embeds.T) ``` #### Video ```python import torch from languagebind import LanguageBindVideo, LanguageBindVideoTokenizer, LanguageBindVideoProcessor pretrained_ckpt = 'LanguageBind/LanguageBind_Video_FT' # also 'LanguageBind/LanguageBind_Video' model = LanguageBindVideo.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') tokenizer = LanguageBindVideoTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') video_process = LanguageBindVideoProcessor(model.config, tokenizer) model.eval() data = video_process(["your/video.mp4"], ['your text.'], return_tensors='pt') with torch.no_grad(): out = model(**data) print(out.text_embeds @ out.image_embeds.T) ``` #### Audio ```python import torch from languagebind import LanguageBindAudio, LanguageBindAudioTokenizer, LanguageBindAudioProcessor pretrained_ckpt = 'LanguageBind/LanguageBind_Audio_FT' # also 'LanguageBind/LanguageBind_Audio' model = LanguageBindAudio.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') tokenizer = LanguageBindAudioTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') audio_process = LanguageBindAudioProcessor(model.config, tokenizer) model.eval() data = audio_process([r"your/audio.wav"], ['your audio.'], return_tensors='pt') with torch.no_grad(): out = model(**data) print(out.text_embeds @ out.image_embeds.T) ``` #### Image Note that our image encoder is the same as OpenCLIP. **Not** as fine-tuned as other modalities. ```python import torch from languagebind import LanguageBindImage, LanguageBindImageTokenizer, LanguageBindImageProcessor pretrained_ckpt = 'LanguageBind/LanguageBind_Image' model = LanguageBindImage.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') tokenizer = LanguageBindImageTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') image_process = LanguageBindImageProcessor(model.config, tokenizer) model.eval() data = image_process([r"your/image.jpg"], ['your text.'], return_tensors='pt') with torch.no_grad(): out = model(**data) print(out.text_embeds @ out.image_embeds.T) ``` ## ๐Ÿ’ฅ VIDAL-10M The datasets is in [DATASETS.md](DATASETS.md). ## ๐Ÿ—๏ธ Training & Validating The training & validating instruction is in [TRAIN_AND_VALIDATE.md](TRAIN_AND_VALIDATE.md). 
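As another usage sketch that is not part of the original examples, the audio branch can rank several clips against a single text query. The file names below are placeholders, and we assume the processor accepts lists of different lengths for the audio and text inputs, since each modality is encoded independently.

```python
import torch
from languagebind import LanguageBindAudio, LanguageBindAudioTokenizer, LanguageBindAudioProcessor

# Hypothetical retrieval example: rank a few audio clips against one text query.
pretrained_ckpt = 'LanguageBind/LanguageBind_Audio_FT'
model = LanguageBindAudio.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir')
tokenizer = LanguageBindAudioTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir')
audio_process = LanguageBindAudioProcessor(model.config, tokenizer)
model.eval()

audio_paths = ['your/dog_barking.wav', 'your/rain.wav', 'your/guitar.wav']  # placeholder paths
query = ['a dog barking in the yard.']
data = audio_process(audio_paths, query, return_tensors='pt')

with torch.no_grad():
    out = model(**data)

# Similarity of the single text query to each audio clip; argmax picks the best match.
scores = (out.text_embeds @ out.image_embeds.T).squeeze(0)
best = scores.argmax().item()
print(scores, audio_paths[best])
```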
## ๐Ÿ‘ Acknowledgement * [OpenCLIP](https://github.com/mlfoundations/open_clip) An open source pretraining framework. * [CLIP4Clip](https://github.com/ArrowLuo/CLIP4Clip) An open source Video-Text retrieval framework. * [sRGB-TIR](https://github.com/rpmsnu/sRGB-TIR) An open source framework to generate infrared (thermal) images. * [GLPN](https://github.com/vinvino02/GLPDepth) An open source framework to generate depth images. ## ๐Ÿ”’ License * The majority of this project is released under the MIT license as found in the [LICENSE](https://github.com/PKU-YuanGroup/LanguageBind/blob/main/LICENSE) file. * The dataset of this project is released under the CC-BY-NC 4.0 license as found in the [DATASET_LICENSE](https://github.com/PKU-YuanGroup/LanguageBind/blob/main/DATASET_LICENSE) file. ## โœ๏ธ Citation If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil:. ```BibTeX @misc{zhu2023languagebind, title={LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment}, author={Bin Zhu and Bin Lin and Munan Ning and Yang Yan and Jiaxi Cui and Wang HongFa and Yatian Pang and Wenhao Jiang and Junwu Zhang and Zongwei Li and Cai Wan Zhang and Zhifeng Li and Wei Liu and Li Yuan}, year={2023}, eprint={2310.01852}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ## โœจ Star History [![Star History](https://api.star-history.com/svg?repos=PKU-YuanGroup/LanguageBind&type=Date)](https://star-history.com/#PKU-YuanGroup/LanguageBind&Date) ## ๐Ÿค Contributors <a href="https://github.com/PKU-YuanGroup/LanguageBind/graphs/contributors"> <img src="https://contrib.rocks/image?repo=PKU-YuanGroup/LanguageBind" /> </a>
LanguageBind/LanguageBind_Audio
LanguageBind
2024-02-01T06:56:55Z
549
3
transformers
[ "transformers", "pytorch", "LanguageBindAudio", "zero-shot-image-classification", "arxiv:2310.01852", "license:mit", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
2023-10-06T09:05:22Z
--- license: mit --- <p align="center"> <img src="https://s11.ax1x.com/2024/02/01/pFMDAm9.png" width="250" style="margin-bottom: 0.2;"/> <p> <h2 align="center"> <a href="https://arxiv.org/pdf/2310.01852.pdf">ใ€ICLR 2024 ๐Ÿ”ฅใ€‘LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment</a></h2> <h5 align="center"> If you like our project, please give us a star โญ on GitHub for latest update. </h2> ## ๐Ÿ“ฐ News * **[2024.01.27]** ๐Ÿ‘€๐Ÿ‘€๐Ÿ‘€ Our [MoE-LLaVA](https://github.com/PKU-YuanGroup/MoE-LLaVA) is released! A sparse model with 3B parameters outperformed the dense model with 7B parameters. * **[2024.01.16]** ๐Ÿ”ฅ๐Ÿ”ฅ๐Ÿ”ฅ Our LanguageBind has been accepted at ICLR 2024! We earn the score of 6(3)8(6)6(6)6(6) [here](https://openreview.net/forum?id=QmZKc7UZCy&noteId=OgsxQxAleA). * **[2023.12.15]** ๐Ÿ’ช๐Ÿ’ช๐Ÿ’ช We expand the ๐Ÿ’ฅ๐Ÿ’ฅ๐Ÿ’ฅ VIDAL dataset and now have **10M video-text data**. We launch **LanguageBind_Video 1.5**, checking our [model zoo](#-model-zoo). * **[2023.12.10]** We expand the ๐Ÿ’ฅ๐Ÿ’ฅ๐Ÿ’ฅ VIDAL dataset and now have **10M depth and 10M thermal data**. We are in the process of uploading thermal and depth data on [Hugging Face](https://huggingface.co/datasets/LanguageBind/VIDAL-Depth-Thermal) and expect the whole process to last 1-2 months. * **[2023.11.27]** ๐Ÿ”ฅ๐Ÿ”ฅ๐Ÿ”ฅ We have updated our [paper](https://arxiv.org/abs/2310.01852) with emergency zero-shot results., checking our โœจ [results](#emergency-results). * **[2023.11.26]** ๐Ÿ’ฅ๐Ÿ’ฅ๐Ÿ’ฅ We have open-sourced all textual sources and corresponding YouTube IDs [here](DATASETS.md). * **[2023.11.26]** ๐Ÿ“ฃ๐Ÿ“ฃ๐Ÿ“ฃ We have open-sourced fully fine-tuned **Video & Audio**, achieving improved performance once again, checking our [model zoo](#-model-zoo). * **[2023.11.22]** We are about to release a fully fine-tuned version, and the **HUGE** version is currently undergoing training. * **[2023.11.21]** ๐Ÿ’ฅ We are releasing sample data in [DATASETS.md](DATASETS.md) so that individuals who are interested can further modify the code to train it on their own data. * **[2023.11.20]** ๐Ÿš€๐Ÿš€๐Ÿš€ [Video-LLaVA](https://github.com/PKU-YuanGroup/Video-LLaVA) builds a large visual-language model to achieve ๐ŸŽ‰SOTA performances based on LanguageBind encoders. * **[2023.10.23]** ๐ŸŽถ LanguageBind-Audio achieves ๐ŸŽ‰๐ŸŽ‰๐ŸŽ‰**state-of-the-art (SOTA) performance on 5 datasets**, checking our โœจ [results](#multiple-modalities)! * **[2023.10.14]** ๐Ÿ˜ฑ Released a stronger LanguageBind-Video, checking our โœจ [results](#video-language)! The video checkpoint **have updated** on Huggingface Model Hub! * **[2023.10.10]** We provide sample data, which can be found in [assets](assets), and [emergency zero-shot usage](#emergency-zero-shot) is described. * **[2023.10.07]** The checkpoints are available on ๐Ÿค— [Huggingface Model](https://huggingface.co/LanguageBind). * **[2023.10.04]** Code and [demo](https://huggingface.co/spaces/LanguageBind/LanguageBind) are available now! Welcome to **watch** ๐Ÿ‘€ this repository for the latest updates. ## ๐Ÿ˜ฎ Highlights ### ๐Ÿ’ก High performance, but NO intermediate modality required LanguageBind is a **language-centric** multimodal pretraining approach, **taking the language as the bind across different modalities** because the language modality is well-explored and contains rich semantics. * The following first figure shows the architecture of LanguageBind. 
LanguageBind can be easily extended to segmentation, detection tasks, and potentially to unlimited modalities. ### โšก๏ธ A multimodal, fully aligned and voluminous dataset We propose **VIDAL-10M**, **10 Million data** with **V**ideo, **I**nfrared, **D**epth, **A**udio and their corresponding **L**anguage, which greatly expands the data beyond visual modalities. * The second figure shows our proposed VIDAL-10M dataset, which includes five modalities: video, infrared, depth, audio, and language. ### ๐Ÿ”ฅ Multi-view enhanced description for training We make multi-view enhancements to language. We produce multi-view description that combines **meta-data**, **spatial**, and **temporal** to greatly enhance the semantic information of the language. In addition we further **enhance the language with ChatGPT** to create a good semantic space for each modality aligned language. ## ๐Ÿค— Demo * **Local demo.** Highly recommend trying out our web demo, which incorporates all features currently supported by LanguageBind. ```bash python gradio_app.py ``` * **Online demo.** We provide the [online demo](https://huggingface.co/spaces/LanguageBind/LanguageBind) in Huggingface Spaces. In this demo, you can calculate the similarity of modalities to language, such as audio-to-language, video-to-language, and depth-to-image. ## ๐Ÿ› ๏ธ Requirements and Installation * Python >= 3.8 * Pytorch >= 1.13.1 * CUDA Version >= 11.6 * Install required packages: ```bash git clone https://github.com/PKU-YuanGroup/LanguageBind cd LanguageBind pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116 pip install -r requirements.txt ``` ## ๐Ÿณ Model Zoo The names in the table represent different encoder models. For example, `LanguageBind/LanguageBind_Video_FT` represents the fully fine-tuned version, while `LanguageBind/LanguageBind_Video` represents the LoRA-tuned version. You can freely replace them in the recommended [API usage](#-api). We recommend using the fully fine-tuned version, as it offers stronger performance. 
<div align="center"> <table border="1" width="100%"> <tr align="center"> <th>Modality</th><th>LoRA tuning</th><th>Fine-tuning</th> </tr> <tr align="center"> <td>Video</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video">LanguageBind_Video</a></td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video_FT">LanguageBind_Video_FT</a></td> </tr> <tr align="center"> <td>Audio</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Audio">LanguageBind_Audio</a></td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Audio_FT">LanguageBind_Audio_FT</a></td> </tr> <tr align="center"> <td>Depth</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Depth">LanguageBind_Depth</a></td><td>-</td> </tr> <tr align="center"> <td>Thermal</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Thermal">LanguageBind_Thermal</a></td><td>-</td> </tr> </table> </div> <div align="center"> <table border="1" width="100%"> <tr align="center"> <th>Version</th><th>Tuning</th><th>Model size</th><th>Num_frames</th><th>HF Link</th><th>MSR-VTT</th><th>DiDeMo</th><th>ActivityNet</th><th>MSVD</th> </tr> <tr align="center"> <td>LanguageBind_Video</td><td>LoRA</td><td>Large</td><td>8</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video">Link</a></td><td>42.6</td><td>37.8</td><td>35.1</td><td>52.2</td> </tr> <tr align="center"> <td>LanguageBind_Video_FT</td><td>Full-tuning</td><td>Large</td><td>8</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video_FT">Link</a></td><td>42.7</td><td>38.1</td><td>36.9</td><td>53.5</td> </tr> <tr align="center"> <td>LanguageBind_Video_V1.5_FT</td><td>Full-tuning</td><td>Large</td><td>8</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video_V1.5_FT">Link</a></td><td>42.8</td><td>39.7</td><td>38.4</td><td>54.1</td> </tr> <tr align="center"> <td>LanguageBind_Video_V1.5_FT</td><td>Full-tuning</td><td>Large</td><td>12</td><td>Coming soon</td> </tr> <tr align="center"> <td>LanguageBind_Video_Huge_V1.5_FT</td><td>Full-tuning</td><td>Huge</td><td>8</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video_Huge_V1.5_FT">Link</a></td><td>44.8</td><td>39.9</td><td>41.0</td><td>53.7</td> </tr> <tr align="center"> <td>LanguageBind_Video_Huge_V1.5_FT</td><td>Full-tuning</td><td>Huge</td><td>12</td><td>Coming soon</td> </tr> </table> </div> ## ๐Ÿค– API **We open source all modalities preprocessing code.** If you want to load the model (e.g. ```LanguageBind/LanguageBind_Thermal```) from the model hub on Huggingface or on local, you can use the following code snippets! ### Inference for Multi-modal Binding We have provided some sample datasets in [assets](assets) to quickly see how languagebind works. 
```python import torch from languagebind import LanguageBind, to_device, transform_dict, LanguageBindImageTokenizer if __name__ == '__main__': device = 'cuda:0' device = torch.device(device) clip_type = { 'video': 'LanguageBind_Video_FT', # also LanguageBind_Video 'audio': 'LanguageBind_Audio_FT', # also LanguageBind_Audio 'thermal': 'LanguageBind_Thermal', 'image': 'LanguageBind_Image', 'depth': 'LanguageBind_Depth', } model = LanguageBind(clip_type=clip_type, cache_dir='./cache_dir') model = model.to(device) model.eval() pretrained_ckpt = f'lb203/LanguageBind_Image' tokenizer = LanguageBindImageTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir/tokenizer_cache_dir') modality_transform = {c: transform_dict[c](model.modality_config[c]) for c in clip_type.keys()} image = ['assets/image/0.jpg', 'assets/image/1.jpg'] audio = ['assets/audio/0.wav', 'assets/audio/1.wav'] video = ['assets/video/0.mp4', 'assets/video/1.mp4'] depth = ['assets/depth/0.png', 'assets/depth/1.png'] thermal = ['assets/thermal/0.jpg', 'assets/thermal/1.jpg'] language = ["Training a parakeet to climb up a ladder.", 'A lion climbing a tree to catch a monkey.'] inputs = { 'image': to_device(modality_transform['image'](image), device), 'video': to_device(modality_transform['video'](video), device), 'audio': to_device(modality_transform['audio'](audio), device), 'depth': to_device(modality_transform['depth'](depth), device), 'thermal': to_device(modality_transform['thermal'](thermal), device), } inputs['language'] = to_device(tokenizer(language, max_length=77, padding='max_length', truncation=True, return_tensors='pt'), device) with torch.no_grad(): embeddings = model(inputs) print("Video x Text: \n", torch.softmax(embeddings['video'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) print("Image x Text: \n", torch.softmax(embeddings['image'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) print("Depth x Text: \n", torch.softmax(embeddings['depth'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) print("Audio x Text: \n", torch.softmax(embeddings['audio'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) print("Thermal x Text: \n", torch.softmax(embeddings['thermal'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) ``` Then returns the following result. ```bash Video x Text: [[9.9989331e-01 1.0667283e-04] [1.3255903e-03 9.9867439e-01]] Image x Text: [[9.9990666e-01 9.3292067e-05] [4.6132666e-08 1.0000000e+00]] Depth x Text: [[0.9954276 0.00457235] [0.12042473 0.8795753 ]] Audio x Text: [[0.97634876 0.02365119] [0.02917843 0.97082156]] Thermal x Text: [[0.9482511 0.0517489 ] [0.48746133 0.5125386 ]] ``` ### Emergency zero-shot Since languagebind binds each modality together, we also found the **emergency zero-shot**. It's very simple to use. ```python print("Video x Audio: \n", torch.softmax(embeddings['video'] @ embeddings['audio'].T, dim=-1).detach().cpu().numpy()) print("Image x Depth: \n", torch.softmax(embeddings['image'] @ embeddings['depth'].T, dim=-1).detach().cpu().numpy()) print("Image x Thermal: \n", torch.softmax(embeddings['image'] @ embeddings['thermal'].T, dim=-1).detach().cpu().numpy()) ``` Then, you will get: ``` Video x Audio: [[1.0000000e+00 0.0000000e+00] [3.1150486e-32 1.0000000e+00]] Image x Depth: [[1. 0.] [0. 1.]] Image x Thermal: [[1. 0.] [0. 1.]] ``` ### Different branches for X-Language task Additionally, LanguageBind can be **disassembled into different branches** to handle different tasks. 
Note that we do not train Image, which just initialize from OpenCLIP. #### Thermal ```python import torch from languagebind import LanguageBindThermal, LanguageBindThermalTokenizer, LanguageBindThermalProcessor pretrained_ckpt = 'LanguageBind/LanguageBind_Thermal' model = LanguageBindThermal.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') tokenizer = LanguageBindThermalTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') thermal_process = LanguageBindThermalProcessor(model.config, tokenizer) model.eval() data = thermal_process([r"your/thermal.jpg"], ['your text'], return_tensors='pt') with torch.no_grad(): out = model(**data) print(out.text_embeds @ out.image_embeds.T) ``` #### Depth ```python import torch from languagebind import LanguageBindDepth, LanguageBindDepthTokenizer, LanguageBindDepthProcessor pretrained_ckpt = 'LanguageBind/LanguageBind_Depth' model = LanguageBindDepth.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') tokenizer = LanguageBindDepthTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') depth_process = LanguageBindDepthProcessor(model.config, tokenizer) model.eval() data = depth_process([r"your/depth.png"], ['your text.'], return_tensors='pt') with torch.no_grad(): out = model(**data) print(out.text_embeds @ out.image_embeds.T) ``` #### Video ```python import torch from languagebind import LanguageBindVideo, LanguageBindVideoTokenizer, LanguageBindVideoProcessor pretrained_ckpt = 'LanguageBind/LanguageBind_Video_FT' # also 'LanguageBind/LanguageBind_Video' model = LanguageBindVideo.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') tokenizer = LanguageBindVideoTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') video_process = LanguageBindVideoProcessor(model.config, tokenizer) model.eval() data = video_process(["your/video.mp4"], ['your text.'], return_tensors='pt') with torch.no_grad(): out = model(**data) print(out.text_embeds @ out.image_embeds.T) ``` #### Audio ```python import torch from languagebind import LanguageBindAudio, LanguageBindAudioTokenizer, LanguageBindAudioProcessor pretrained_ckpt = 'LanguageBind/LanguageBind_Audio_FT' # also 'LanguageBind/LanguageBind_Audio' model = LanguageBindAudio.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') tokenizer = LanguageBindAudioTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') audio_process = LanguageBindAudioProcessor(model.config, tokenizer) model.eval() data = audio_process([r"your/audio.wav"], ['your audio.'], return_tensors='pt') with torch.no_grad(): out = model(**data) print(out.text_embeds @ out.image_embeds.T) ``` #### Image Note that our image encoder is the same as OpenCLIP. **Not** as fine-tuned as other modalities. ```python import torch from languagebind import LanguageBindImage, LanguageBindImageTokenizer, LanguageBindImageProcessor pretrained_ckpt = 'LanguageBind/LanguageBind_Image' model = LanguageBindImage.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') tokenizer = LanguageBindImageTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') image_process = LanguageBindImageProcessor(model.config, tokenizer) model.eval() data = image_process([r"your/image.jpg"], ['your text.'], return_tensors='pt') with torch.no_grad(): out = model(**data) print(out.text_embeds @ out.image_embeds.T) ``` ## ๐Ÿ’ฅ VIDAL-10M The datasets is in [DATASETS.md](DATASETS.md). ## ๐Ÿ—๏ธ Training & Validating The training & validating instruction is in [TRAIN_AND_VALIDATE.md](TRAIN_AND_VALIDATE.md). 
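For video, the same pattern extends to scoring one clip against several candidate captions, which is handy as a quick sanity check. This is an illustrative sketch only; the video path and captions are placeholders, and we assume the processor accepts one video with multiple texts.

```python
import torch
from languagebind import LanguageBindVideo, LanguageBindVideoTokenizer, LanguageBindVideoProcessor

# Hypothetical example: score one video against several candidate captions.
pretrained_ckpt = 'LanguageBind/LanguageBind_Video_FT'
model = LanguageBindVideo.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir')
tokenizer = LanguageBindVideoTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir')
video_process = LanguageBindVideoProcessor(model.config, tokenizer)
model.eval()

captions = ['a person playing a guitar.', 'a dog running on the beach.', 'cars driving in heavy rain.']  # placeholders
data = video_process(["your/video.mp4"], captions, return_tensors='pt')

with torch.no_grad():
    out = model(**data)

# Row: the single video; columns: the candidate captions.
probs = torch.softmax(out.image_embeds @ out.text_embeds.T, dim=-1)
print(probs)
```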
## ๐Ÿ‘ Acknowledgement * [OpenCLIP](https://github.com/mlfoundations/open_clip) An open source pretraining framework. * [CLIP4Clip](https://github.com/ArrowLuo/CLIP4Clip) An open source Video-Text retrieval framework. * [sRGB-TIR](https://github.com/rpmsnu/sRGB-TIR) An open source framework to generate infrared (thermal) images. * [GLPN](https://github.com/vinvino02/GLPDepth) An open source framework to generate depth images. ## ๐Ÿ”’ License * The majority of this project is released under the MIT license as found in the [LICENSE](https://github.com/PKU-YuanGroup/LanguageBind/blob/main/LICENSE) file. * The dataset of this project is released under the CC-BY-NC 4.0 license as found in the [DATASET_LICENSE](https://github.com/PKU-YuanGroup/LanguageBind/blob/main/DATASET_LICENSE) file. ## โœ๏ธ Citation If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil:. ```BibTeX @misc{zhu2023languagebind, title={LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment}, author={Bin Zhu and Bin Lin and Munan Ning and Yang Yan and Jiaxi Cui and Wang HongFa and Yatian Pang and Wenhao Jiang and Junwu Zhang and Zongwei Li and Cai Wan Zhang and Zhifeng Li and Wei Liu and Li Yuan}, year={2023}, eprint={2310.01852}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ## โœจ Star History [![Star History](https://api.star-history.com/svg?repos=PKU-YuanGroup/LanguageBind&type=Date)](https://star-history.com/#PKU-YuanGroup/LanguageBind&Date) ## ๐Ÿค Contributors <a href="https://github.com/PKU-YuanGroup/LanguageBind/graphs/contributors"> <img src="https://contrib.rocks/image?repo=PKU-YuanGroup/LanguageBind" /> </a>
LanguageBind/LanguageBind_Audio_FT
LanguageBind
2024-02-01T06:56:41Z
5,296
1
transformers
[ "transformers", "pytorch", "LanguageBindAudio", "zero-shot-image-classification", "arxiv:2310.01852", "license:mit", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
2023-11-26T07:37:41Z
--- license: mit --- <p align="center"> <img src="https://s11.ax1x.com/2024/02/01/pFMDAm9.png" width="250" style="margin-bottom: 0.2;"/> <p> <h2 align="center"> <a href="https://arxiv.org/pdf/2310.01852.pdf">ใ€ICLR 2024 ๐Ÿ”ฅใ€‘LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment</a></h2> <h5 align="center"> If you like our project, please give us a star โญ on GitHub for latest update. </h2> ## ๐Ÿ“ฐ News * **[2024.01.27]** ๐Ÿ‘€๐Ÿ‘€๐Ÿ‘€ Our [MoE-LLaVA](https://github.com/PKU-YuanGroup/MoE-LLaVA) is released! A sparse model with 3B parameters outperformed the dense model with 7B parameters. * **[2024.01.16]** ๐Ÿ”ฅ๐Ÿ”ฅ๐Ÿ”ฅ Our LanguageBind has been accepted at ICLR 2024! We earn the score of 6(3)8(6)6(6)6(6) [here](https://openreview.net/forum?id=QmZKc7UZCy&noteId=OgsxQxAleA). * **[2023.12.15]** ๐Ÿ’ช๐Ÿ’ช๐Ÿ’ช We expand the ๐Ÿ’ฅ๐Ÿ’ฅ๐Ÿ’ฅ VIDAL dataset and now have **10M video-text data**. We launch **LanguageBind_Video 1.5**, checking our [model zoo](#-model-zoo). * **[2023.12.10]** We expand the ๐Ÿ’ฅ๐Ÿ’ฅ๐Ÿ’ฅ VIDAL dataset and now have **10M depth and 10M thermal data**. We are in the process of uploading thermal and depth data on [Hugging Face](https://huggingface.co/datasets/LanguageBind/VIDAL-Depth-Thermal) and expect the whole process to last 1-2 months. * **[2023.11.27]** ๐Ÿ”ฅ๐Ÿ”ฅ๐Ÿ”ฅ We have updated our [paper](https://arxiv.org/abs/2310.01852) with emergency zero-shot results., checking our โœจ [results](#emergency-results). * **[2023.11.26]** ๐Ÿ’ฅ๐Ÿ’ฅ๐Ÿ’ฅ We have open-sourced all textual sources and corresponding YouTube IDs [here](DATASETS.md). * **[2023.11.26]** ๐Ÿ“ฃ๐Ÿ“ฃ๐Ÿ“ฃ We have open-sourced fully fine-tuned **Video & Audio**, achieving improved performance once again, checking our [model zoo](#-model-zoo). * **[2023.11.22]** We are about to release a fully fine-tuned version, and the **HUGE** version is currently undergoing training. * **[2023.11.21]** ๐Ÿ’ฅ We are releasing sample data in [DATASETS.md](DATASETS.md) so that individuals who are interested can further modify the code to train it on their own data. * **[2023.11.20]** ๐Ÿš€๐Ÿš€๐Ÿš€ [Video-LLaVA](https://github.com/PKU-YuanGroup/Video-LLaVA) builds a large visual-language model to achieve ๐ŸŽ‰SOTA performances based on LanguageBind encoders. * **[2023.10.23]** ๐ŸŽถ LanguageBind-Audio achieves ๐ŸŽ‰๐ŸŽ‰๐ŸŽ‰**state-of-the-art (SOTA) performance on 5 datasets**, checking our โœจ [results](#multiple-modalities)! * **[2023.10.14]** ๐Ÿ˜ฑ Released a stronger LanguageBind-Video, checking our โœจ [results](#video-language)! The video checkpoint **have updated** on Huggingface Model Hub! * **[2023.10.10]** We provide sample data, which can be found in [assets](assets), and [emergency zero-shot usage](#emergency-zero-shot) is described. * **[2023.10.07]** The checkpoints are available on ๐Ÿค— [Huggingface Model](https://huggingface.co/LanguageBind). * **[2023.10.04]** Code and [demo](https://huggingface.co/spaces/LanguageBind/LanguageBind) are available now! Welcome to **watch** ๐Ÿ‘€ this repository for the latest updates. ## ๐Ÿ˜ฎ Highlights ### ๐Ÿ’ก High performance, but NO intermediate modality required LanguageBind is a **language-centric** multimodal pretraining approach, **taking the language as the bind across different modalities** because the language modality is well-explored and contains rich semantics. * The following first figure shows the architecture of LanguageBind. 
LanguageBind can be easily extended to segmentation, detection tasks, and potentially to unlimited modalities. ### โšก๏ธ A multimodal, fully aligned and voluminous dataset We propose **VIDAL-10M**, **10 Million data** with **V**ideo, **I**nfrared, **D**epth, **A**udio and their corresponding **L**anguage, which greatly expands the data beyond visual modalities. * The second figure shows our proposed VIDAL-10M dataset, which includes five modalities: video, infrared, depth, audio, and language. ### ๐Ÿ”ฅ Multi-view enhanced description for training We make multi-view enhancements to language. We produce multi-view description that combines **meta-data**, **spatial**, and **temporal** to greatly enhance the semantic information of the language. In addition we further **enhance the language with ChatGPT** to create a good semantic space for each modality aligned language. ## ๐Ÿค— Demo * **Local demo.** Highly recommend trying out our web demo, which incorporates all features currently supported by LanguageBind. ```bash python gradio_app.py ``` * **Online demo.** We provide the [online demo](https://huggingface.co/spaces/LanguageBind/LanguageBind) in Huggingface Spaces. In this demo, you can calculate the similarity of modalities to language, such as audio-to-language, video-to-language, and depth-to-image. ## ๐Ÿ› ๏ธ Requirements and Installation * Python >= 3.8 * Pytorch >= 1.13.1 * CUDA Version >= 11.6 * Install required packages: ```bash git clone https://github.com/PKU-YuanGroup/LanguageBind cd LanguageBind pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116 pip install -r requirements.txt ``` ## ๐Ÿณ Model Zoo The names in the table represent different encoder models. For example, `LanguageBind/LanguageBind_Video_FT` represents the fully fine-tuned version, while `LanguageBind/LanguageBind_Video` represents the LoRA-tuned version. You can freely replace them in the recommended [API usage](#-api). We recommend using the fully fine-tuned version, as it offers stronger performance. 
<div align="center"> <table border="1" width="100%"> <tr align="center"> <th>Modality</th><th>LoRA tuning</th><th>Fine-tuning</th> </tr> <tr align="center"> <td>Video</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video">LanguageBind_Video</a></td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video_FT">LanguageBind_Video_FT</a></td> </tr> <tr align="center"> <td>Audio</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Audio">LanguageBind_Audio</a></td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Audio_FT">LanguageBind_Audio_FT</a></td> </tr> <tr align="center"> <td>Depth</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Depth">LanguageBind_Depth</a></td><td>-</td> </tr> <tr align="center"> <td>Thermal</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Thermal">LanguageBind_Thermal</a></td><td>-</td> </tr> </table> </div> <div align="center"> <table border="1" width="100%"> <tr align="center"> <th>Version</th><th>Tuning</th><th>Model size</th><th>Num_frames</th><th>HF Link</th><th>MSR-VTT</th><th>DiDeMo</th><th>ActivityNet</th><th>MSVD</th> </tr> <tr align="center"> <td>LanguageBind_Video</td><td>LoRA</td><td>Large</td><td>8</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video">Link</a></td><td>42.6</td><td>37.8</td><td>35.1</td><td>52.2</td> </tr> <tr align="center"> <td>LanguageBind_Video_FT</td><td>Full-tuning</td><td>Large</td><td>8</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video_FT">Link</a></td><td>42.7</td><td>38.1</td><td>36.9</td><td>53.5</td> </tr> <tr align="center"> <td>LanguageBind_Video_V1.5_FT</td><td>Full-tuning</td><td>Large</td><td>8</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video_V1.5_FT">Link</a></td><td>42.8</td><td>39.7</td><td>38.4</td><td>54.1</td> </tr> <tr align="center"> <td>LanguageBind_Video_V1.5_FT</td><td>Full-tuning</td><td>Large</td><td>12</td><td>Coming soon</td> </tr> <tr align="center"> <td>LanguageBind_Video_Huge_V1.5_FT</td><td>Full-tuning</td><td>Huge</td><td>8</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video_Huge_V1.5_FT">Link</a></td><td>44.8</td><td>39.9</td><td>41.0</td><td>53.7</td> </tr> <tr align="center"> <td>LanguageBind_Video_Huge_V1.5_FT</td><td>Full-tuning</td><td>Huge</td><td>12</td><td>Coming soon</td> </tr> </table> </div> ## ๐Ÿค– API **We open source all modalities preprocessing code.** If you want to load the model (e.g. ```LanguageBind/LanguageBind_Thermal```) from the model hub on Huggingface or on local, you can use the following code snippets! ### Inference for Multi-modal Binding We have provided some sample datasets in [assets](assets) to quickly see how languagebind works. 
```python import torch from languagebind import LanguageBind, to_device, transform_dict, LanguageBindImageTokenizer if __name__ == '__main__': device = 'cuda:0' device = torch.device(device) clip_type = { 'video': 'LanguageBind_Video_FT', # also LanguageBind_Video 'audio': 'LanguageBind_Audio_FT', # also LanguageBind_Audio 'thermal': 'LanguageBind_Thermal', 'image': 'LanguageBind_Image', 'depth': 'LanguageBind_Depth', } model = LanguageBind(clip_type=clip_type, cache_dir='./cache_dir') model = model.to(device) model.eval() pretrained_ckpt = f'lb203/LanguageBind_Image' tokenizer = LanguageBindImageTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir/tokenizer_cache_dir') modality_transform = {c: transform_dict[c](model.modality_config[c]) for c in clip_type.keys()} image = ['assets/image/0.jpg', 'assets/image/1.jpg'] audio = ['assets/audio/0.wav', 'assets/audio/1.wav'] video = ['assets/video/0.mp4', 'assets/video/1.mp4'] depth = ['assets/depth/0.png', 'assets/depth/1.png'] thermal = ['assets/thermal/0.jpg', 'assets/thermal/1.jpg'] language = ["Training a parakeet to climb up a ladder.", 'A lion climbing a tree to catch a monkey.'] inputs = { 'image': to_device(modality_transform['image'](image), device), 'video': to_device(modality_transform['video'](video), device), 'audio': to_device(modality_transform['audio'](audio), device), 'depth': to_device(modality_transform['depth'](depth), device), 'thermal': to_device(modality_transform['thermal'](thermal), device), } inputs['language'] = to_device(tokenizer(language, max_length=77, padding='max_length', truncation=True, return_tensors='pt'), device) with torch.no_grad(): embeddings = model(inputs) print("Video x Text: \n", torch.softmax(embeddings['video'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) print("Image x Text: \n", torch.softmax(embeddings['image'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) print("Depth x Text: \n", torch.softmax(embeddings['depth'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) print("Audio x Text: \n", torch.softmax(embeddings['audio'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) print("Thermal x Text: \n", torch.softmax(embeddings['thermal'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) ``` Then returns the following result. ```bash Video x Text: [[9.9989331e-01 1.0667283e-04] [1.3255903e-03 9.9867439e-01]] Image x Text: [[9.9990666e-01 9.3292067e-05] [4.6132666e-08 1.0000000e+00]] Depth x Text: [[0.9954276 0.00457235] [0.12042473 0.8795753 ]] Audio x Text: [[0.97634876 0.02365119] [0.02917843 0.97082156]] Thermal x Text: [[0.9482511 0.0517489 ] [0.48746133 0.5125386 ]] ``` ### Emergency zero-shot Since languagebind binds each modality together, we also found the **emergency zero-shot**. It's very simple to use. ```python print("Video x Audio: \n", torch.softmax(embeddings['video'] @ embeddings['audio'].T, dim=-1).detach().cpu().numpy()) print("Image x Depth: \n", torch.softmax(embeddings['image'] @ embeddings['depth'].T, dim=-1).detach().cpu().numpy()) print("Image x Thermal: \n", torch.softmax(embeddings['image'] @ embeddings['thermal'].T, dim=-1).detach().cpu().numpy()) ``` Then, you will get: ``` Video x Audio: [[1.0000000e+00 0.0000000e+00] [3.1150486e-32 1.0000000e+00]] Image x Depth: [[1. 0.] [0. 1.]] Image x Thermal: [[1. 0.] [0. 1.]] ``` ### Different branches for X-Language task Additionally, LanguageBind can be **disassembled into different branches** to handle different tasks. 
Note that we do not train Image, which just initialize from OpenCLIP. #### Thermal ```python import torch from languagebind import LanguageBindThermal, LanguageBindThermalTokenizer, LanguageBindThermalProcessor pretrained_ckpt = 'LanguageBind/LanguageBind_Thermal' model = LanguageBindThermal.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') tokenizer = LanguageBindThermalTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') thermal_process = LanguageBindThermalProcessor(model.config, tokenizer) model.eval() data = thermal_process([r"your/thermal.jpg"], ['your text'], return_tensors='pt') with torch.no_grad(): out = model(**data) print(out.text_embeds @ out.image_embeds.T) ``` #### Depth ```python import torch from languagebind import LanguageBindDepth, LanguageBindDepthTokenizer, LanguageBindDepthProcessor pretrained_ckpt = 'LanguageBind/LanguageBind_Depth' model = LanguageBindDepth.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') tokenizer = LanguageBindDepthTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') depth_process = LanguageBindDepthProcessor(model.config, tokenizer) model.eval() data = depth_process([r"your/depth.png"], ['your text.'], return_tensors='pt') with torch.no_grad(): out = model(**data) print(out.text_embeds @ out.image_embeds.T) ``` #### Video ```python import torch from languagebind import LanguageBindVideo, LanguageBindVideoTokenizer, LanguageBindVideoProcessor pretrained_ckpt = 'LanguageBind/LanguageBind_Video_FT' # also 'LanguageBind/LanguageBind_Video' model = LanguageBindVideo.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') tokenizer = LanguageBindVideoTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') video_process = LanguageBindVideoProcessor(model.config, tokenizer) model.eval() data = video_process(["your/video.mp4"], ['your text.'], return_tensors='pt') with torch.no_grad(): out = model(**data) print(out.text_embeds @ out.image_embeds.T) ``` #### Audio ```python import torch from languagebind import LanguageBindAudio, LanguageBindAudioTokenizer, LanguageBindAudioProcessor pretrained_ckpt = 'LanguageBind/LanguageBind_Audio_FT' # also 'LanguageBind/LanguageBind_Audio' model = LanguageBindAudio.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') tokenizer = LanguageBindAudioTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') audio_process = LanguageBindAudioProcessor(model.config, tokenizer) model.eval() data = audio_process([r"your/audio.wav"], ['your audio.'], return_tensors='pt') with torch.no_grad(): out = model(**data) print(out.text_embeds @ out.image_embeds.T) ``` #### Image Note that our image encoder is the same as OpenCLIP. **Not** as fine-tuned as other modalities. ```python import torch from languagebind import LanguageBindImage, LanguageBindImageTokenizer, LanguageBindImageProcessor pretrained_ckpt = 'LanguageBind/LanguageBind_Image' model = LanguageBindImage.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') tokenizer = LanguageBindImageTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') image_process = LanguageBindImageProcessor(model.config, tokenizer) model.eval() data = image_process([r"your/image.jpg"], ['your text.'], return_tensors='pt') with torch.no_grad(): out = model(**data) print(out.text_embeds @ out.image_embeds.T) ``` ## ๐Ÿ’ฅ VIDAL-10M The datasets is in [DATASETS.md](DATASETS.md). ## ๐Ÿ—๏ธ Training & Validating The training & validating instruction is in [TRAIN_AND_VALIDATE.md](TRAIN_AND_VALIDATE.md). 
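The raw similarity matrix from any branch can also be normalized in either direction, depending on whether you are retrieving texts for a given thermal image or thermal images for a given text. A small illustrative sketch follows; the paths and texts are placeholders and the snippet is not from the original repository.

```python
import torch
from languagebind import LanguageBindThermal, LanguageBindThermalTokenizer, LanguageBindThermalProcessor

# Hypothetical example: two thermal images vs. two texts, normalized in both retrieval directions.
pretrained_ckpt = 'LanguageBind/LanguageBind_Thermal'
model = LanguageBindThermal.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir')
tokenizer = LanguageBindThermalTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir')
thermal_process = LanguageBindThermalProcessor(model.config, tokenizer)
model.eval()

images = [r"your/thermal_0.jpg", r"your/thermal_1.jpg"]   # placeholder paths
texts = ['a person walking at night.', 'a parked car.']   # placeholder texts
data = thermal_process(images, texts, return_tensors='pt')

with torch.no_grad():
    out = model(**data)

sim = out.image_embeds @ out.text_embeds.T                # shape: (num_images, num_texts)
print("image-to-text:\n", torch.softmax(sim, dim=-1))     # best text for each image
print("text-to-image:\n", torch.softmax(sim, dim=0))      # best image for each text
```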
## ๐Ÿ‘ Acknowledgement * [OpenCLIP](https://github.com/mlfoundations/open_clip) An open source pretraining framework. * [CLIP4Clip](https://github.com/ArrowLuo/CLIP4Clip) An open source Video-Text retrieval framework. * [sRGB-TIR](https://github.com/rpmsnu/sRGB-TIR) An open source framework to generate infrared (thermal) images. * [GLPN](https://github.com/vinvino02/GLPDepth) An open source framework to generate depth images. ## ๐Ÿ”’ License * The majority of this project is released under the MIT license as found in the [LICENSE](https://github.com/PKU-YuanGroup/LanguageBind/blob/main/LICENSE) file. * The dataset of this project is released under the CC-BY-NC 4.0 license as found in the [DATASET_LICENSE](https://github.com/PKU-YuanGroup/LanguageBind/blob/main/DATASET_LICENSE) file. ## โœ๏ธ Citation If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil:. ```BibTeX @misc{zhu2023languagebind, title={LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment}, author={Bin Zhu and Bin Lin and Munan Ning and Yang Yan and Jiaxi Cui and Wang HongFa and Yatian Pang and Wenhao Jiang and Junwu Zhang and Zongwei Li and Cai Wan Zhang and Zhifeng Li and Wei Liu and Li Yuan}, year={2023}, eprint={2310.01852}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ## โœจ Star History [![Star History](https://api.star-history.com/svg?repos=PKU-YuanGroup/LanguageBind&type=Date)](https://star-history.com/#PKU-YuanGroup/LanguageBind&Date) ## ๐Ÿค Contributors <a href="https://github.com/PKU-YuanGroup/LanguageBind/graphs/contributors"> <img src="https://contrib.rocks/image?repo=PKU-YuanGroup/LanguageBind" /> </a>
nakcnx/phi-2-sql-1g10e-lora
nakcnx
2024-02-01T06:55:46Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-01T06:55:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LanguageBind/Video-LLaVA-Pretrain-7B
LanguageBind
2024-02-01T06:52:24Z
33
10
transformers
[ "transformers", "llava", "text-generation", "arxiv:2311.10122", "arxiv:2310.01852", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-11-17T05:07:08Z
--- license: apache-2.0 --- <p align="center"> <img src="https://z1.ax1x.com/2023/11/07/pil4sqH.png" width="150" style="margin-bottom: 0.2;"/> <p> <h2 align="center"> <a href="https://arxiv.org/abs/2311.10122">Video-LLaVA: Learning United Visual Representation by Alignment Before Projection</a></h2> <h5 align="center"> If you like our project, please give us a star โญ on GitHub for latest update. </h2> ## ๐Ÿ“ฐ News * **[2024.01.27]** ๐Ÿ‘€๐Ÿ‘€๐Ÿ‘€ Our [MoE-LLaVA](https://github.com/PKU-YuanGroup/MoE-LLaVA) is released! A sparse model with 3B parameters outperformed the dense model with 7B parameters. * **[2024.01.17]** ๐Ÿ”ฅ๐Ÿ”ฅ๐Ÿ”ฅ Our [LanguageBind](https://github.com/PKU-YuanGroup/LanguageBind) has been accepted at ICLR 2024! * **[2024.01.16]** ๐Ÿ”ฅ๐Ÿ”ฅ๐Ÿ”ฅ We reorganize the code and support LoRA fine-tuning, checking [finetune_lora.sh](scripts/v1_5/finetune_lora.sh). * **[2023.11.30]** ๐Ÿค Thanks to the generous contributions of the community, the [OpenXLab's demo](https://openxlab.org.cn/apps/detail/houshaowei/Video-LLaVA) is now accessible. * **[2023.11.23]** We are training a new and powerful model. * **[2023.11.21]** ๐Ÿค Check out the [replicate demo](https://replicate.com/nateraw/video-llava), created by [@nateraw](https://github.com/nateraw), who has generously supported our research! * **[2023.11.20]** ๐Ÿค— [Hugging Face demo](https://huggingface.co/spaces/LanguageBind/Video-LLaVA) and **all codes & datasets** are available now! Welcome to **watch** ๐Ÿ‘€ this repository for the latest updates. ## ๐Ÿ˜ฎ Highlights Video-LLaVA exhibits remarkable interactive capabilities between images and videos, despite the absence of image-video pairs in the dataset. ### ๐Ÿ’ก Simple baseline, learning united visual representation by alignment before projection - With **the binding of unified visual representations to the language feature space**, we enable an LLM to perform visual reasoning capabilities on both images and videos simultaneously. ### ๐Ÿ”ฅ High performance, complementary learning with video and image - Extensive experiments demonstrate **the complementarity of modalities**, showcasing significant superiority when compared to models specifically designed for either images or videos. ## ๐Ÿค— Demo ### Gradio Web UI Highly recommend trying out our web demo by the following command, which incorporates all features currently supported by Video-LLaVA. We also provide [online demo](https://huggingface.co/spaces/LanguageBind/Video-LLaVA) in Huggingface Spaces. ```bash python -m videollava.serve.gradio_web_server ``` ### CLI Inference ```bash python -m videollava.serve.cli --model-path "LanguageBind/Video-LLaVA-7B" --file "path/to/your/video.mp4" --load-4bit ``` ```bash python -m videollava.serve.cli --model-path "LanguageBind/Video-LLaVA-7B" --file "path/to/your/image.jpg" --load-4bit ``` ## ๐Ÿ› ๏ธ Requirements and Installation * Python >= 3.10 * Pytorch == 2.0.1 * CUDA Version >= 11.7 * Install required packages: ```bash git clone https://github.com/PKU-YuanGroup/Video-LLaVA cd Video-LLaVA conda create -n videollava python=3.10 -y conda activate videollava pip install --upgrade pip # enable PEP 660 support pip install -e . pip install -e ".[train]" pip install flash-attn --no-build-isolation pip install decord opencv-python git+https://github.com/facebookresearch/pytorchvideo.git@28fe037d212663c6a24f373b94cc5d478c8c1a1d ``` ## ๐Ÿค– API **We open source all codes.** If you want to load the model (e.g. ```LanguageBind/Video-LLaVA-7B```) on local, you can use the following code snippets. 
### Inference for image ```python import torch from videollava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN from videollava.conversation import conv_templates, SeparatorStyle from videollava.model.builder import load_pretrained_model from videollava.utils import disable_torch_init from videollava.mm_utils import tokenizer_image_token, get_model_name_from_path, KeywordsStoppingCriteria def main(): disable_torch_init() image = 'videollava/serve/examples/extreme_ironing.jpg' inp = 'What is unusual about this image?' model_path = 'LanguageBind/Video-LLaVA-7B' cache_dir = 'cache_dir' device = 'cuda' load_4bit, load_8bit = True, False model_name = get_model_name_from_path(model_path) tokenizer, model, processor, _ = load_pretrained_model(model_path, None, model_name, load_8bit, load_4bit, device=device, cache_dir=cache_dir) image_processor = processor['image'] conv_mode = "llava_v1" conv = conv_templates[conv_mode].copy() roles = conv.roles image_tensor = image_processor.preprocess(image, return_tensors='pt')['pixel_values'] if type(image_tensor) is list: tensor = [image.to(model.device, dtype=torch.float16) for image in image_tensor] else: tensor = image_tensor.to(model.device, dtype=torch.float16) print(f"{roles[1]}: {inp}") inp = DEFAULT_IMAGE_TOKEN + '\n' + inp conv.append_message(conv.roles[0], inp) conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda() stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2 keywords = [stop_str] stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids) with torch.inference_mode(): output_ids = model.generate( input_ids, images=tensor, do_sample=True, temperature=0.2, max_new_tokens=1024, use_cache=True, stopping_criteria=[stopping_criteria]) outputs = tokenizer.decode(output_ids[0, input_ids.shape[1]:]).strip() print(outputs) if __name__ == '__main__': main() ``` ### Inference for video ```python import torch from videollava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN from videollava.conversation import conv_templates, SeparatorStyle from videollava.model.builder import load_pretrained_model from videollava.utils import disable_torch_init from videollava.mm_utils import tokenizer_image_token, get_model_name_from_path, KeywordsStoppingCriteria def main(): disable_torch_init() video = 'videollava/serve/examples/sample_demo_1.mp4' inp = 'Why is this video funny?' 
model_path = 'LanguageBind/Video-LLaVA-7B' cache_dir = 'cache_dir' device = 'cuda' load_4bit, load_8bit = True, False model_name = get_model_name_from_path(model_path) tokenizer, model, processor, _ = load_pretrained_model(model_path, None, model_name, load_8bit, load_4bit, device=device, cache_dir=cache_dir) video_processor = processor['video'] conv_mode = "llava_v1" conv = conv_templates[conv_mode].copy() roles = conv.roles video_tensor = video_processor(video, return_tensors='pt')['pixel_values'] if type(video_tensor) is list: tensor = [video.to(model.device, dtype=torch.float16) for video in video_tensor] else: tensor = video_tensor.to(model.device, dtype=torch.float16) print(f"{roles[1]}: {inp}") inp = ' '.join([DEFAULT_IMAGE_TOKEN] * model.get_video_tower().config.num_frames) + '\n' + inp conv.append_message(conv.roles[0], inp) conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda() stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2 keywords = [stop_str] stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids) with torch.inference_mode(): output_ids = model.generate( input_ids, images=tensor, do_sample=True, temperature=0.1, max_new_tokens=1024, use_cache=True, stopping_criteria=[stopping_criteria]) outputs = tokenizer.decode(output_ids[0, input_ids.shape[1]:]).strip() print(outputs) if __name__ == '__main__': main() ``` ## ๐Ÿ—๏ธ Training & Validating The training & validating instruction is in [TRAIN_AND_VALIDATE.md](TRAIN_AND_VALIDATE.md). ## ๐Ÿ‘ Acknowledgement * [LLaVA](https://github.com/haotian-liu/LLaVA) The codebase we built upon and it is an efficient large language and vision assistant. * [Video-ChatGPT](https://github.com/mbzuai-oryx/Video-ChatGPT) Great job contributing the evaluation code and dataset. ## ๐Ÿ™Œ Related Projects * [LanguageBind](https://github.com/PKU-YuanGroup/LanguageBind) An open source five modalities language-based retrieval framework. * [Chat-UniVi](https://github.com/PKU-YuanGroup/Chat-UniVi) This framework empowers the model to efficiently utilize a limited number of visual tokens. ## ๐Ÿ”’ License * The majority of this project is released under the Apache 2.0 license as found in the [LICENSE](https://github.com/PKU-YuanGroup/Video-LLaVA/blob/main/LICENSE) file. * The service is a research preview intended for non-commercial use only, subject to the model [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation. ## โœ๏ธ Citation If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil:. 
```BibTeX @article{lin2023video, title={Video-LLaVA: Learning United Visual Representation by Alignment Before Projection}, author={Lin, Bin and Zhu, Bin and Ye, Yang and Ning, Munan and Jin, Peng and Yuan, Li}, journal={arXiv preprint arXiv:2311.10122}, year={2023} } ``` ```BibTeX @article{zhu2023languagebind, title={LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment}, author={Zhu, Bin and Lin, Bin and Ning, Munan and Yan, Yang and Cui, Jiaxi and Wang, HongFa and Pang, Yatian and Jiang, Wenhao and Zhang, Junwu and Li, Zongwei and others}, journal={arXiv preprint arXiv:2310.01852}, year={2023} } ``` ## ✨ Star History [![Star History](https://api.star-history.com/svg?repos=PKU-YuanGroup/Video-LLaVA&type=Date)](https://star-history.com/#PKU-YuanGroup/Video-LLaVA&Date) ## 🤝 Contributors <a href="https://github.com/PKU-YuanGroup/Video-LLaVA/graphs/contributors"> <img src="https://contrib.rocks/image?repo=PKU-YuanGroup/Video-LLaVA" /> </a>
JKuang96/pixelcopter_v3
JKuang96
2024-02-01T06:51:02Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-02-01T06:50:56Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: pixelcopter_v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 57.70 +/- 36.28 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
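The card above names a REINFORCE policy-gradient agent but does not show the update it uses. The snippet below is a minimal, generic sketch of the REINFORCE objective taught in Unit 4 of the course (discounted returns weighted by action log-probabilities); the network shape, hidden size, and normalization step are illustrative assumptions, not the exact training script behind this checkpoint.

```python
# Minimal REINFORCE sketch (illustrative only; not the exact script used for this checkpoint).
import torch
import torch.nn as nn
import torch.nn.functional as F


class Policy(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.fc1 = nn.Linear(state_dim, hidden)
        self.fc2 = nn.Linear(hidden, action_dim)

    def forward(self, x):
        # Returns a categorical distribution over actions.
        return F.softmax(self.fc2(F.relu(self.fc1(x))), dim=-1)


def reinforce_loss(log_probs, rewards, gamma: float = 0.99):
    # Discounted returns G_t for one episode, then the policy-gradient loss -sum(log pi * G_t).
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # simple variance reduction
    return -(torch.stack(log_probs) * returns).sum()
```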
JKuang96/pixelcopter
JKuang96
2024-02-01T06:47:51Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-01-31T12:04:43Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: pixelcopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 38.30 +/- 54.66 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
worldboss/code-llama-7b-text-to-sql
worldboss
2024-02-01T06:43:09Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:codellama/CodeLlama-7b-hf", "base_model:adapter:codellama/CodeLlama-7b-hf", "license:llama2", "region:us" ]
null
2024-02-01T06:02:38Z
--- license: llama2 library_name: peft tags: - trl - sft - generated_from_trainer datasets: - generator base_model: codellama/CodeLlama-7b-hf model-index: - name: code-llama-7b-text-to-sql results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # code-llama-7b-text-to-sql This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
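Since the card above leaves its usage section empty, here is a minimal inference sketch. It assumes the repo holds a standard PEFT LoRA adapter that loads with peft's `AutoPeftModelForCausalLM`, takes the tokenizer from the base CodeLlama model, and uses an illustrative text-to-SQL prompt; the actual prompt template used during fine-tuning is not documented in the card.

```python
# Hedged inference sketch: load the LoRA adapter on top of CodeLlama-7b (assumes a GPU).
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "worldboss/code-llama-7b-text-to-sql"
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.float16, device_map="auto"
)

# Illustrative prompt only; the training prompt format is not documented in the card.
prompt = (
    "-- Table: employees(id INT, name TEXT, salary INT)\n"
    "-- Question: names of employees with salary above 50000\n"
    "SELECT"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```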
bachbouch/w-3-qlora
bachbouch
2024-02-01T06:39:33Z
0
0
peft
[ "peft", "safetensors", "mistral", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-02-01T05:29:59Z
--- library_name: peft base_model: mistralai/Mistral-7B-Instruct-v0.2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2.dev0
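The "How to Get Started" section of the card above is empty, so the following is a minimal sketch of loading this QLoRA-style adapter on top of mistralai/Mistral-7B-Instruct-v0.2 with 4-bit quantization. The quantization settings are common defaults rather than values documented by the author, and it assumes the repo contains standard PEFT adapter files.

```python
# Hedged loading sketch for the adapter; quantization settings are assumptions, not documented.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "bachbouch/w-3-qlora"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

messages = [{"role": "user", "content": "Give me three facts about the Mediterranean Sea."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```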
mkdir700/v2-starcoderbase1b-personal-copilot-A100-40GB-colab
mkdir700
2024-02-01T06:38:24Z
95
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt_bigcode", "text-generation", "generated_from_trainer", "base_model:bigcode/starcoderbase-1b", "base_model:finetune:bigcode/starcoderbase-1b", "license:bigcode-openrail-m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T01:37:56Z
--- license: bigcode-openrail-m tags: - generated_from_trainer base_model: bigcode/starcoderbase-1b model-index: - name: v2-starcoderbase1b-personal-copilot-A100-40GB-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # v2-starcoderbase1b-personal-copilot-A100-40GB-colab This model is a fine-tuned version of [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 30 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0 | 0.1 | 100 | nan | | 0.0 | 0.2 | 200 | nan | | 0.0 | 0.3 | 300 | nan | | 0.0 | 0.4 | 400 | nan | | 0.0 | 0.5 | 500 | nan | | 0.0 | 0.6 | 600 | nan | | 0.0 | 0.7 | 700 | nan | | 0.0 | 0.8 | 800 | nan | | 0.0 | 0.9 | 900 | nan | | 0.0 | 1.0 | 1000 | nan | ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
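The card above fine-tunes starcoderbase-1b as a personal copilot but gives no usage example. The sketch below assumes the checkpoint keeps the base model's StarCoder-style fill-in-the-middle tokens (`<fim_prefix>`, `<fim_suffix>`, `<fim_middle>`); note that the card reports a NaN validation loss, so this particular checkpoint may not produce usable completions.

```python
# Hedged fill-in-the-middle sketch using StarCoder-style FIM tokens (an assumption about the
# tokenizer); the reported NaN validation loss suggests the checkpoint may be degenerate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mkdir700/v2-starcoderbase1b-personal-copilot-A100-40GB-colab"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prefix = "def read_json(path):\n    "
suffix = "\n    return data\n"
prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```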
sleepyboi152/SolarMistral-10.7B
sleepyboi152
2024-02-01T06:37:35Z
0
0
null
[ "merge", "mergekit", "lazymergekit", "cognitivecomputations/dolphin-2.6-mistral-7b", "NousResearch/Nous-Hermes-2-SOLAR-10.7B", "base_model:NousResearch/Nous-Hermes-2-SOLAR-10.7B", "base_model:merge:NousResearch/Nous-Hermes-2-SOLAR-10.7B", "base_model:cognitivecomputations/dolphin-2.6-mistral-7b", "base_model:merge:cognitivecomputations/dolphin-2.6-mistral-7b", "region:us" ]
null
2024-02-01T06:22:53Z
--- tags: - merge - mergekit - lazymergekit - cognitivecomputations/dolphin-2.6-mistral-7b - NousResearch/Nous-Hermes-2-SOLAR-10.7B base_model: - cognitivecomputations/dolphin-2.6-mistral-7b - NousResearch/Nous-Hermes-2-SOLAR-10.7B --- # SolarMistral-10.7B SolarMistral-10.7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [cognitivecomputations/dolphin-2.6-mistral-7b](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b) * [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) ## ๐Ÿงฉ Configuration ```yaml slices: - sources: - model: cognitivecomputations/dolphin-2.6-mistral-7b layer_range: [0, 32] - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B layer_range: [0, 32] merge_method: slerp base_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## ๐Ÿ’ป Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "sleepyboi152/SolarMistral-10.7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
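The configuration above selects `merge_method: slerp` with a per-layer interpolation schedule `t`. As a rough intuition for that method, the toy function below spherically interpolates two flattened weight tensors; it only illustrates the idea and is not mergekit's actual implementation, and the tensors here are random stand-ins for real model weights.

```python
# Toy illustration of spherical linear interpolation (SLERP) between two weight tensors.
# NOT mergekit's implementation; it only sketches the idea behind merge_method: slerp.
import torch

def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    a, b = w_a.flatten(), w_b.flatten()
    a_n, b_n = a / (a.norm() + eps), b / (b.norm() + eps)
    omega = torch.acos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))  # angle between the tensors
    if omega.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1 - t) * w_a + t * w_b
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return out.reshape(w_a.shape)

# Random stand-ins for two models' weights at the same layer, blended with t = 0.5.
w1, w2 = torch.randn(4096, 4096), torch.randn(4096, 4096)
merged = slerp(w1, w2, t=0.5)
```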
OpenDILabCommunity/TicTacToe-play-with-bot-SampledAlphaZero
OpenDILabCommunity
2024-02-01T06:30:04Z
0
0
pytorch
[ "pytorch", "deep-reinforcement-learning", "reinforcement-learning", "DI-engine", "TicTacToe-play-with-bot", "en", "arxiv:2310.08348", "license:apache-2.0", "model-index", "region:us" ]
reinforcement-learning
2024-02-01T06:29:40Z
--- language: en license: apache-2.0 library_name: pytorch tags: - deep-reinforcement-learning - reinforcement-learning - DI-engine - TicTacToe-play-with-bot benchmark_name: OpenAI/Gym/Atari task_name: TicTacToe-play-with-bot pipeline_tag: reinforcement-learning model-index: - name: SampledAlphaZero results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: TicTacToe-play-with-bot type: TicTacToe-play-with-bot metrics: - type: mean_reward value: 0.3 +/- 0.64 name: mean_reward --- # Play **TicTacToe-play-with-bot** with **SampledAlphaZero** Policy ## Model Description <!-- Provide a longer summary of what this model is. --> This implementation applies **SampledAlphaZero** to the OpenAI/Gym/Atari **TicTacToe-play-with-bot** environment using [LightZero](https://github.com/opendilab/LightZero) and [DI-engine](https://github.com/opendilab/di-engine). **LightZero** is an efficient, easy-to-understand open-source toolkit that merges Monte Carlo Tree Search (MCTS) with Deep Reinforcement Learning (RL), simplifying their integration for developers and researchers. More details are in paper [LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios](https://huggingface.co/papers/2310.08348). ## Model Usage ### Install the Dependencies <details close> <summary>(Click for Details)</summary> ```shell # install huggingface_ding git clone https://github.com/opendilab/huggingface_ding.git pip3 install -e ./huggingface_ding/ # install environment dependencies if needed pip3 install DI-engine[common_env,video] pip3 install LightZero ``` </details> ### Git Clone from Huggingface and Run the Model <details close> <summary>(Click for Details)</summary> ```shell # running with trained model python3 -u run.py ``` **run.py** ```python from lzero.agent import SampledAlphaZeroAgent from ding.config import Config from easydict import EasyDict import torch # Pull model from files which are git cloned from huggingface policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu")) cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict) # Instantiate the agent agent = SampledAlphaZeroAgent( env_id="TicTacToe-play-with-bot", exp_name="TicTacToe-play-with-bot-SampledAlphaZero", cfg=cfg.exp_config, policy_state_dict=policy_state_dict ) # Continue training agent.train(step=5000) # Render the new agent performance agent.deploy(enable_save_replay=True) ``` </details> ### Run Model by Using Huggingface_ding <details close> <summary>(Click for Details)</summary> ```shell # running with trained model python3 -u run.py ``` **run.py** ```python from lzero.agent import SampledAlphaZeroAgent from huggingface_ding import pull_model_from_hub # Pull model from Hugggingface hub policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/TicTacToe-play-with-bot-SampledAlphaZero") # Instantiate the agent agent = SampledAlphaZeroAgent( env_id="TicTacToe-play-with-bot", exp_name="TicTacToe-play-with-bot-SampledAlphaZero", cfg=cfg.exp_config, policy_state_dict=policy_state_dict ) # Continue training agent.train(step=5000) # Render the new agent performance agent.deploy(enable_save_replay=True) ``` </details> ## Model Training ### Train the Model and Push to Huggingface_hub <details close> <summary>(Click for Details)</summary> ```shell #Training Your Own Agent python3 -u train.py ``` **train.py** ```python from lzero.agent import SampledAlphaZeroAgent from huggingface_ding import push_model_to_hub # Instantiate the agent 
agent = SampledAlphaZeroAgent(env_id="TicTacToe-play-with-bot", exp_name="TicTacToe-play-with-bot-SampledAlphaZero") # Train the agent return_ = agent.train(step=int(500000)) # Push model to huggingface hub push_model_to_hub( agent=agent.best, env_name="OpenAI/Gym/Atari", task_name="TicTacToe-play-with-bot", algo_name="SampledAlphaZero", github_repo_url="https://github.com/opendilab/LightZero", github_doc_model_url=None, github_doc_env_url=None, installation_guide=''' pip3 install DI-engine[common_env,video] pip3 install LightZero ''', usage_file_by_git_clone="./sampled_alphazero/tictactoe_play_with_bot_sampled_alphazero_deploy.py", usage_file_by_huggingface_ding="./sampled_alphazero/tictactoe_play_with_bot_sampled_alphazero_download.py", train_file="./sampled_alphazero/tictactoe_play_with_bot_sampled_alphazero.py", repo_id="OpenDILabCommunity/TicTacToe-play-with-bot-SampledAlphaZero", platform_info="[LightZero](https://github.com/opendilab/LightZero) and [DI-engine](https://github.com/opendilab/di-engine)", model_description="**LightZero** is an efficient, easy-to-understand open-source toolkit that merges Monte Carlo Tree Search (MCTS) with Deep Reinforcement Learning (RL), simplifying their integration for developers and researchers. More details are in paper [LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios](https://huggingface.co/papers/2310.08348).", create_repo=True ) ``` </details> **Configuration** <details close> <summary>(Click for Details)</summary> ```python exp_config = { 'main_config': { 'exp_name': 'TicTacToe-play-with-bot-SampledAlphaZero', 'seed': 0, 'env': { 'env_id': 'TicTacToe-play-with-bot', 'board_size': 3, 'battle_mode': 'play_with_bot_mode', 'bot_action_type': 'v0', 'channel_last': False, 'collector_env_num': 8, 'evaluator_env_num': 5, 'n_evaluator_episode': 5, 'manager': { 'shared_memory': False }, 'agent_vs_human': False, 'prob_random_agent': 0, 'prob_expert_agent': 0, 'scale': True, 'alphazero_mcts_ctree': False, 'save_replay_gif': False, 'replay_path_gif': './replay_gif' }, 'policy': { 'on_policy': False, 'cuda': True, 'multi_gpu': False, 'bp_update_sync': True, 'traj_len_inf': False, 'model': { 'observation_shape': [3, 3, 3], 'action_space_size': 9, 'num_res_blocks': 1, 'num_channels': 16, 'fc_value_layers': [8], 'fc_policy_layers': [8] }, 'torch_compile': False, 'tensor_float_32': False, 'sampled_algo': False, 'gumbel_algo': False, 'update_per_collect': 50, 'model_update_ratio': 0.1, 'batch_size': 256, 'optim_type': 'Adam', 'learning_rate': 0.003, 'weight_decay': 0.0001, 'momentum': 0.9, 'grad_clip_value': 0.5, 'value_weight': 1.0, 'collector_env_num': 8, 'evaluator_env_num': 5, 'lr_piecewise_constant_decay': False, 'threshold_training_steps_for_final_lr': 500000, 'manual_temperature_decay': False, 'threshold_training_steps_for_final_temperature': 100000, 'fixed_temperature_value': 0.25, 'mcts': { 'num_simulations': 25 }, 'other': { 'replay_buffer': { 'replay_buffer_size': 1000000, 'save_episode': False } }, 'cfg_type': 'AlphaZeroPolicyDict', 'mcts_ctree': False, 'simulation_env_name': 'tictactoe', 'simulation_env_config_type': 'play_with_bot', 'board_size': 3, 'entropy_weight': 0.0, 'n_episode': 8, 'eval_freq': 2000 }, 'wandb_logger': { 'gradient_logger': False, 'video_logger': False, 'plot_logger': False, 'action_logger': False, 'return_logger': False } }, 'create_config': { 'env': { 'type': 'tictactoe', 'import_names': ['zoo.board_games.tictactoe.envs.tictactoe_env'] }, 'env_manager': { 'type': 
'subprocess' }, 'policy': { 'type': 'alphazero', 'import_names': ['lzero.policy.alphazero'] }, 'collector': { 'type': 'episode_alphazero', 'import_names': ['lzero.worker.alphazero_collector'] }, 'evaluator': { 'type': 'alphazero', 'import_names': ['lzero.worker.alphazero_evaluator'] } } } ``` </details> **Training Procedure** <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> - **Weights & Biases (wandb):** [monitor link](<TODO>) ## Model Information <!-- Provide the basic links for the model. --> - **Github Repository:** [repo link](https://github.com/opendilab/LightZero) - **Doc**: [Algorithm link](<TODO>) - **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/TicTacToe-play-with-bot-SampledAlphaZero/blob/main/policy_config.py) - **Demo:** [video](https://huggingface.co/OpenDILabCommunity/TicTacToe-play-with-bot-SampledAlphaZero/blob/main/replay.mp4) <!-- Provide the size information for the model. --> - **Parameters total size:** 51.13 KB - **Last Update Date:** 2024-02-01 ## Environments <!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. --> - **Benchmark:** OpenAI/Gym/Atari - **Task:** TicTacToe-play-with-bot - **Gym version:** 0.25.1 - **DI-engine version:** v0.5.0 - **PyTorch version:** 2.0.1+cu117 - **Doc**: [Environments link](<TODO>)
cchoi1022/librispeech-100h-supervised_not_mine
cchoi1022
2024-02-01T06:28:03Z
118
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-01T06:22:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
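Because the auto-generated card above leaves its "How to Get Started" section empty, here is a minimal transcription sketch using the transformers ASR pipeline. It assumes the checkpoint is a CTC-style wav2vec2 ASR model, as the repo tags suggest, and that you supply a 16 kHz mono audio file at the illustrative path shown.

```python
# Hedged usage sketch for this wav2vec2 ASR checkpoint (the audio path is illustrative).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="cchoi1022/librispeech-100h-supervised_not_mine",
)
result = asr("sample_16khz.wav")  # accepts a file path or a raw 16 kHz audio array
print(result["text"])
```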
cchoi1022/my-test-model
cchoi1022
2024-02-01T06:21:19Z
119
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-01T06:19:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Hui-1/speaker_recognition-campplus-zh-en-16k-base
Hui-1
2024-02-01T06:20:57Z
0
0
null
[ "CAM++", "3D-Speaker", "Speaker Recognition & Verification", "Chinese & English", "zh", "en", "dataset:3D-Speaker", "dataset:voxceleb", "dataset:cnceleb", "license:apache-2.0", "region:us" ]
null
2024-01-31T09:42:22Z
--- language: - zh - en tags: - CAM++ - 3D-Speaker - Speaker Recognition & Verification - Chinese & English license: apache-2.0 datasets: - 3D-Speaker - voxceleb - cnceleb metrics: - EER - minDCF --- # CAM++: A Fast and Efficient Speaker Recognition Network CAM++ is a fast and efficient speaker recognition network based on a densely connected time delay neural network (D-TDNN). This repository provides tools for extracting speaker embeddings and performing speaker verification with a pretrained CAM++ model, which was trained on a large-scale dataset covering both Chinese and English corpora.
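Once two utterance-level embeddings have been extracted with the toolkit, speaker verification reduces to scoring them, typically with cosine similarity against a decision threshold. The sketch below shows only that scoring step with placeholder embeddings; the embedding-extraction call itself depends on the 3D-Speaker tooling and is not reproduced here, and the embedding dimension and threshold are illustrative.

```python
# Verification scoring sketch: cosine similarity between two speaker embeddings.
# The embeddings are random placeholders; in practice they come from the CAM++ model.
import numpy as np

def cosine_score(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    a = emb_a / (np.linalg.norm(emb_a) + 1e-8)
    b = emb_b / (np.linalg.norm(emb_b) + 1e-8)
    return float(np.dot(a, b))

emb_enroll = np.random.randn(192)   # placeholder: enrollment utterance embedding
emb_test = np.random.randn(192)     # placeholder: test utterance embedding

threshold = 0.5                     # illustrative operating point; tune on a dev set
score = cosine_score(emb_enroll, emb_test)
print("same speaker" if score >= threshold else "different speaker", score)
```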
LanguageBind/MoE-LLaVA-Phi2-Pretrain
LanguageBind
2024-02-01T06:10:06Z
14
0
transformers
[ "transformers", "llava_phi", "text-generation", "custom_code", "arxiv:2401.15947", "arxiv:2311.10122", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-31T11:32:06Z
--- license: apache-2.0 --- <p align="center"> <img src="https://s11.ax1x.com/2023/12/28/piqvDMV.png" width="250" style="margin-bottom: 0.2;"/> <p> <h2 align="center"> <a href="https://arxiv.org/abs/2401.15947">MoE-LLaVA: Mixture of Experts for Large Vision-Language Models</a></h2> <h5 align="center"> If you like our project, please give us a star โญ on GitHub for latest update. </h2> <h5 align="center"> </h5> ## ๐Ÿ“ฐ News * **[2024.01.30]** The [paper](https://arxiv.org/abs/2401.15947) is released. * **[2024.01.27]** ๐Ÿค—[Hugging Face demo](https://huggingface.co/spaces/LanguageBind/MoE-LLaVA) and **all codes & datasets** are available now! Welcome to **watch** ๐Ÿ‘€ this repository for the latest updates. ## ๐Ÿ˜ฎ Highlights MoE-LLaVA shows excellent performance in multi-modal learning. ### ๐Ÿ”ฅ High performance, but with fewer parameters - with just **3B sparsely activated parameters**, MoE-LLaVA demonstrates performance comparable to the LLaVA-1.5-7B on various visual understanding datasets and even surpasses the LLaVA-1.5-13B in object hallucination benchmarks. ### ๐Ÿš€ Simple baseline, learning multi-modal interactions with sparse pathways. - With the addition of **a simple MoE tuning stage**, we can complete the training of MoE-LLaVA on **8 V100 GPUs** within 2 days. ## ๐Ÿค— Demo ### Gradio Web UI Highly recommend trying out our web demo by the following command, which incorporates all features currently supported by MoE-LLaVA. We also provide [online demo](https://huggingface.co/spaces/LanguageBind/MoE-LLaVA) in Huggingface Spaces. ```bash # use phi2 deepspeed --include localhost:0 moellava/serve/gradio_web_server.py --model-path "LanguageBind/MoE-LLaVA-Phi2-2.7B-4e" # use qwen deepspeed --include localhost:0 moellava/serve/gradio_web_server.py --model-path "LanguageBind/MoE-LLaVA-Qwen-1.8B-4e" # use stablelm deepspeed --include localhost:0 moellava/serve/gradio_web_server.py --model-path "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e" ``` ### CLI Inference ```bash # use phi2 deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-Phi2-2.7B-4e" --image-file "image.jpg" # use qwen deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-Qwen-1.8B-4e" --image-file "image.jpg" # use stablelm deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e" --image-file "image.jpg" ``` ## ๐Ÿณ Model Zoo | Model | LLM | Checkpoint | Avg | VQAv2 | GQA | VizWiz | SQA | T-VQA | POPE | MM-Bench| LLaVA-Bench-Wild | MM-Vet | |----------|-----------|-----------|---|---|---|---|---|---|---|---|---|---| | MoE-LLaVA-1.6Bร—4-Top2 | 1.6B | [LanguageBind/MoE-LLaVA-StableLM-1.6B-4e](https://huggingface.co/LanguageBind/MoE-LLaVA-StableLM-1.6B-4e) | 60.0 | 76.0 | 60.4 | 37.2 | 62.6 | 47.8 | 84.3 | 59.4 | 85.9 | 26.1 | | MoE-LLaVA-1.8Bร—4-Top2 | 1.8B | [LanguageBind/MoE-LLaVA-Qwen-1.8B-4e](https://huggingface.co/LanguageBind/MoE-LLaVA-Qwen-1.8B-4e) | 60.2 | 76.2 | 61.5 | 32.6 | 63.1 | 48.0 | 87.0 | 59.6 | 88.7 | 25.3 | | MoE-LLaVA-2.7Bร—4-Top2 | 2.7B | [LanguageBind/MoE-LLaVA-Phi2-2.7B-4e](https://huggingface.co/LanguageBind/MoE-LLaVA-Phi2-2.7B-4e) | 63.9 | 77.1 | 61.1 | 43.4 | 68.7 | 50.2 | 85.0 | 65.5 | 93.2 | 31.1 | <!-- | LLaVA-1.5 | 7B | [liuhaotian/llava-v1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b) | 62.0 | 78.5 | 62.0 | 50.0 | 66.8 | 58.2 | 85.9 | 64.3 | 31.1 | | LLaVA-1.5 | 13B | [liuhaotian/llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b) | 64.9 | 80.0 | 63.3 | 
53.6 | 71.6 | 61.3 | 85.9 | 67.7 | 36.1 | --> ## โš™๏ธ Requirements and Installation * Python >= 3.10 * Pytorch == 2.0.1 * CUDA Version >= 11.7 * **Transformers == 4.36.2** * **Tokenizers==0.15.1** * Install required packages: ```bash git clone https://github.com/PKU-YuanGroup/MoE-LLaVA cd MoE-LLaVA conda create -n moellava python=3.10 -y conda activate moellava pip install --upgrade pip # enable PEP 660 support pip install -e . pip install -e ".[train]" pip install flash-attn --no-build-isolation # Below are optional. For Qwen model. git clone https://github.com/Dao-AILab/flash-attention cd flash-attention && pip install . # Below are optional. Installing them might be slow. # pip install csrc/layer_norm # If the version of flash-attn is higher than 2.1.1, the following is not needed. # pip install csrc/rotary ``` ## ๐Ÿ—๏ธ Training & Validating The training & validating instruction is in [TRAIN.md](docs/TRAIN.md) & [EVAL.md](docs/EVAL.md). ## ๐Ÿ’ก Customizing your MoE-LLaVA The instruction is in [CUSTOM.md](docs/CUSTOM.md). ## ๐Ÿ˜ Visualization The instruction is in [VISUALIZATION.md](docs/VISUALIZATION.md). ## ๐Ÿค– API **We open source all codes.** If you want to load the model (e.g. ```LanguageBind/MoE-LLaVA```) on local, you can use the following code snippets. **Using the following command to run the code.** ```bash deepspeed predict.py ``` ```python import torch from moellava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN from moellava.conversation import conv_templates, SeparatorStyle from moellava.model.builder import load_pretrained_model from moellava.utils import disable_torch_init from moellava.mm_utils import tokenizer_image_token, get_model_name_from_path, KeywordsStoppingCriteria def main(): disable_torch_init() image = 'moellava/serve/examples/extreme_ironing.jpg' inp = 'What is unusual about this image?' model_path = 'LanguageBind/MoE-LLaVA-Phi2-2.7B-4e' # LanguageBind/MoE-LLaVA-Qwen-1.8B-4e or LanguageBind/MoE-LLaVA-StableLM-1.6B-4e device = 'cuda' load_4bit, load_8bit = False, False # FIXME: Deepspeed support 4bit or 8bit? model_name = get_model_name_from_path(model_path) tokenizer, model, processor, context_len = load_pretrained_model(model_path, None, model_name, load_8bit, load_4bit, device=device) image_processor = processor['image'] conv_mode = "phi" # qwen or stablelm conv = conv_templates[conv_mode].copy() roles = conv.roles image_tensor = image_processor.preprocess(image, return_tensors='pt')['pixel_values'].to(model.device, dtype=torch.float16) print(f"{roles[1]}: {inp}") inp = DEFAULT_IMAGE_TOKEN + '\n' + inp conv.append_message(conv.roles[0], inp) conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda() stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2 keywords = [stop_str] stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids) with torch.inference_mode(): output_ids = model.generate( input_ids, images=image_tensor, do_sample=True, temperature=0.2, max_new_tokens=1024, use_cache=True, stopping_criteria=[stopping_criteria]) outputs = tokenizer.decode(output_ids[0, input_ids.shape[1]:], skip_special_tokens=True).strip() print(outputs) if __name__ == '__main__': main() ``` ## ๐Ÿ™Œ Related Projects * [Video-LLaVA](https://github.com/PKU-YuanGroup/Video-LLaVA) This framework empowers the model to efficiently utilize the united visual tokens. 
* [LanguageBind](https://github.com/PKU-YuanGroup/LanguageBind) An open-source, language-based retrieval framework covering five modalities. ## 👍 Acknowledgement * [LLaVA](https://github.com/haotian-liu/LLaVA) The codebase we built upon; an efficient large language and vision assistant. ## 🔒 License * The majority of this project is released under the Apache 2.0 license as found in the [LICENSE](https://github.com/PKU-YuanGroup/MoE-LLaVA/blob/main/LICENSE) file. * The service is a research preview intended for non-commercial use only, subject to the model [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation. ## ✍️ Citation If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil:. ```BibTeX @misc{lin2024moellava, title={MoE-LLaVA: Mixture of Experts for Large Vision-Language Models}, author={Bin Lin and Zhenyu Tang and Yang Ye and Jiaxi Cui and Bin Zhu and Peng Jin and Junwu Zhang and Munan Ning and Li Yuan}, year={2024}, eprint={2401.15947}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```BibTeX @article{lin2023video, title={Video-LLaVA: Learning United Visual Representation by Alignment Before Projection}, author={Lin, Bin and Zhu, Bin and Ye, Yang and Ning, Munan and Jin, Peng and Yuan, Li}, journal={arXiv preprint arXiv:2311.10122}, year={2023} } ``` ## ✨ Star History [![Star History](https://api.star-history.com/svg?repos=PKU-YuanGroup/MoE-LLaVA&type=Date)](https://star-history.com/#PKU-YuanGroup/MoE-LLaVA&Date) ## 🤝 Contributors <a href="https://github.com/PKU-YuanGroup/MoE-LLaVA/graphs/contributors"> <img src="https://contrib.rocks/image?repo=PKU-YuanGroup/MoE-LLaVA" /> </a>
LanguageBind/MoE-LLaVA-Phi2-384-Pretrain
LanguageBind
2024-02-01T06:09:39Z
14
0
transformers
[ "transformers", "llava_phi", "text-generation", "custom_code", "arxiv:2401.15947", "arxiv:2311.10122", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-31T11:34:09Z
--- license: apache-2.0 --- <p align="center"> <img src="https://s11.ax1x.com/2023/12/28/piqvDMV.png" width="250" style="margin-bottom: 0.2;"/> <p> <h2 align="center"> <a href="https://arxiv.org/abs/2401.15947">MoE-LLaVA: Mixture of Experts for Large Vision-Language Models</a></h2> <h5 align="center"> If you like our project, please give us a star โญ on GitHub for latest update. </h2> <h5 align="center"> </h5> ## ๐Ÿ“ฐ News * **[2024.01.30]** The [paper](https://arxiv.org/abs/2401.15947) is released. * **[2024.01.27]** ๐Ÿค—[Hugging Face demo](https://huggingface.co/spaces/LanguageBind/MoE-LLaVA) and **all codes & datasets** are available now! Welcome to **watch** ๐Ÿ‘€ this repository for the latest updates. ## ๐Ÿ˜ฎ Highlights MoE-LLaVA shows excellent performance in multi-modal learning. ### ๐Ÿ”ฅ High performance, but with fewer parameters - with just **3B sparsely activated parameters**, MoE-LLaVA demonstrates performance comparable to the LLaVA-1.5-7B on various visual understanding datasets and even surpasses the LLaVA-1.5-13B in object hallucination benchmarks. ### ๐Ÿš€ Simple baseline, learning multi-modal interactions with sparse pathways. - With the addition of **a simple MoE tuning stage**, we can complete the training of MoE-LLaVA on **8 V100 GPUs** within 2 days. ## ๐Ÿค— Demo ### Gradio Web UI Highly recommend trying out our web demo by the following command, which incorporates all features currently supported by MoE-LLaVA. We also provide [online demo](https://huggingface.co/spaces/LanguageBind/MoE-LLaVA) in Huggingface Spaces. ```bash # use phi2 deepspeed --include localhost:0 moellava/serve/gradio_web_server.py --model-path "LanguageBind/MoE-LLaVA-Phi2-2.7B-4e" # use qwen deepspeed --include localhost:0 moellava/serve/gradio_web_server.py --model-path "LanguageBind/MoE-LLaVA-Qwen-1.8B-4e" # use stablelm deepspeed --include localhost:0 moellava/serve/gradio_web_server.py --model-path "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e" ``` ### CLI Inference ```bash # use phi2 deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-Phi2-2.7B-4e" --image-file "image.jpg" # use qwen deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-Qwen-1.8B-4e" --image-file "image.jpg" # use stablelm deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e" --image-file "image.jpg" ``` ## ๐Ÿณ Model Zoo | Model | LLM | Checkpoint | Avg | VQAv2 | GQA | VizWiz | SQA | T-VQA | POPE | MM-Bench| LLaVA-Bench-Wild | MM-Vet | |----------|-----------|-----------|---|---|---|---|---|---|---|---|---|---| | MoE-LLaVA-1.6Bร—4-Top2 | 1.6B | [LanguageBind/MoE-LLaVA-StableLM-1.6B-4e](https://huggingface.co/LanguageBind/MoE-LLaVA-StableLM-1.6B-4e) | 60.0 | 76.0 | 60.4 | 37.2 | 62.6 | 47.8 | 84.3 | 59.4 | 85.9 | 26.1 | | MoE-LLaVA-1.8Bร—4-Top2 | 1.8B | [LanguageBind/MoE-LLaVA-Qwen-1.8B-4e](https://huggingface.co/LanguageBind/MoE-LLaVA-Qwen-1.8B-4e) | 60.2 | 76.2 | 61.5 | 32.6 | 63.1 | 48.0 | 87.0 | 59.6 | 88.7 | 25.3 | | MoE-LLaVA-2.7Bร—4-Top2 | 2.7B | [LanguageBind/MoE-LLaVA-Phi2-2.7B-4e](https://huggingface.co/LanguageBind/MoE-LLaVA-Phi2-2.7B-4e) | 63.9 | 77.1 | 61.1 | 43.4 | 68.7 | 50.2 | 85.0 | 65.5 | 93.2 | 31.1 | <!-- | LLaVA-1.5 | 7B | [liuhaotian/llava-v1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b) | 62.0 | 78.5 | 62.0 | 50.0 | 66.8 | 58.2 | 85.9 | 64.3 | 31.1 | | LLaVA-1.5 | 13B | [liuhaotian/llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b) | 64.9 | 80.0 | 63.3 | 
53.6 | 71.6 | 61.3 | 85.9 | 67.7 | 36.1 | --> ## โš™๏ธ Requirements and Installation * Python >= 3.10 * Pytorch == 2.0.1 * CUDA Version >= 11.7 * **Transformers == 4.36.2** * **Tokenizers==0.15.1** * Install required packages: ```bash git clone https://github.com/PKU-YuanGroup/MoE-LLaVA cd MoE-LLaVA conda create -n moellava python=3.10 -y conda activate moellava pip install --upgrade pip # enable PEP 660 support pip install -e . pip install -e ".[train]" pip install flash-attn --no-build-isolation # Below are optional. For Qwen model. git clone https://github.com/Dao-AILab/flash-attention cd flash-attention && pip install . # Below are optional. Installing them might be slow. # pip install csrc/layer_norm # If the version of flash-attn is higher than 2.1.1, the following is not needed. # pip install csrc/rotary ``` ## ๐Ÿ—๏ธ Training & Validating The training & validating instruction is in [TRAIN.md](docs/TRAIN.md) & [EVAL.md](docs/EVAL.md). ## ๐Ÿ’ก Customizing your MoE-LLaVA The instruction is in [CUSTOM.md](docs/CUSTOM.md). ## ๐Ÿ˜ Visualization The instruction is in [VISUALIZATION.md](docs/VISUALIZATION.md). ## ๐Ÿค– API **We open source all codes.** If you want to load the model (e.g. ```LanguageBind/MoE-LLaVA```) on local, you can use the following code snippets. **Using the following command to run the code.** ```bash deepspeed predict.py ``` ```python import torch from moellava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN from moellava.conversation import conv_templates, SeparatorStyle from moellava.model.builder import load_pretrained_model from moellava.utils import disable_torch_init from moellava.mm_utils import tokenizer_image_token, get_model_name_from_path, KeywordsStoppingCriteria def main(): disable_torch_init() image = 'moellava/serve/examples/extreme_ironing.jpg' inp = 'What is unusual about this image?' model_path = 'LanguageBind/MoE-LLaVA-Phi2-2.7B-4e' # LanguageBind/MoE-LLaVA-Qwen-1.8B-4e or LanguageBind/MoE-LLaVA-StableLM-1.6B-4e device = 'cuda' load_4bit, load_8bit = False, False # FIXME: Deepspeed support 4bit or 8bit? model_name = get_model_name_from_path(model_path) tokenizer, model, processor, context_len = load_pretrained_model(model_path, None, model_name, load_8bit, load_4bit, device=device) image_processor = processor['image'] conv_mode = "phi" # qwen or stablelm conv = conv_templates[conv_mode].copy() roles = conv.roles image_tensor = image_processor.preprocess(image, return_tensors='pt')['pixel_values'].to(model.device, dtype=torch.float16) print(f"{roles[1]}: {inp}") inp = DEFAULT_IMAGE_TOKEN + '\n' + inp conv.append_message(conv.roles[0], inp) conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda() stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2 keywords = [stop_str] stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids) with torch.inference_mode(): output_ids = model.generate( input_ids, images=image_tensor, do_sample=True, temperature=0.2, max_new_tokens=1024, use_cache=True, stopping_criteria=[stopping_criteria]) outputs = tokenizer.decode(output_ids[0, input_ids.shape[1]:], skip_special_tokens=True).strip() print(outputs) if __name__ == '__main__': main() ``` ## ๐Ÿ™Œ Related Projects * [Video-LLaVA](https://github.com/PKU-YuanGroup/Video-LLaVA) This framework empowers the model to efficiently utilize the united visual tokens. 
* [LanguageBind](https://github.com/PKU-YuanGroup/LanguageBind) An open source five modalities language-based retrieval framework. ## ๐Ÿ‘ Acknowledgement * [LLaVA](https://github.com/haotian-liu/LLaVA) The codebase we built upon and it is an efficient large language and vision assistant. ## ๐Ÿ”’ License * The majority of this project is released under the Apache 2.0 license as found in the [LICENSE](https://github.com/PKU-YuanGroup/MoE-LLaVA/blob/main/LICENSE) file. * The service is a research preview intended for non-commercial use only, subject to the model [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation. ## โœ๏ธ Citation If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil:. ```BibTeX @misc{lin2024moellava, title={MoE-LLaVA: Mixture of Experts for Large Vision-Language Models}, author={Bin Lin and Zhenyu Tang and Yang Ye and Jiaxi Cui and Bin Zhu and Peng Jin and Junwu Zhang and Munan Ning and Li Yuan}, year={2024}, eprint={2401.15947}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```BibTeX @article{lin2023video, title={Video-LLaVA: Learning United Visual Representation by Alignment Before Projection}, author={Lin, Bin and Zhu, Bin and Ye, Yang and Ning, Munan and Jin, Peng and Yuan, Li}, journal={arXiv preprint arXiv:2311.10122}, year={2023} } ``` ## โœจ Star History [![Star History](https://api.star-history.com/svg?repos=PKU-YuanGroup/MoE-LLaVA&type=Date)](https://star-history.com/#PKU-YuanGroup/MoE-LLaVA&Date) ## ๐Ÿค Contributors <a href="https://github.com/PKU-YuanGroup/MoE-LLaVA/graphs/contributors"> <img src="https://contrib.rocks/image?repo=PKU-YuanGroup/MoE-LLaVA" /> </a>
LanguageBind/MoE-LLaVA-Qwen-1.8B-4e
LanguageBind
2024-02-01T06:09:13Z
199
13
transformers
[ "transformers", "pytorch", "moe_llava_qwen", "text-generation", "custom_code", "arxiv:2401.15947", "arxiv:2311.10122", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-23T13:50:43Z
--- license: apache-2.0 --- <p align="center"> <img src="https://s11.ax1x.com/2023/12/28/piqvDMV.png" width="250" style="margin-bottom: 0.2;"/> <p> <h2 align="center"> <a href="https://arxiv.org/abs/2401.15947">MoE-LLaVA: Mixture of Experts for Large Vision-Language Models</a></h2> <h5 align="center"> If you like our project, please give us a star โญ on GitHub for latest update. </h2> <h5 align="center"> </h5> ## ๐Ÿ“ฐ News * **[2024.01.30]** The [paper](https://arxiv.org/abs/2401.15947) is released. * **[2024.01.27]** ๐Ÿค—[Hugging Face demo](https://huggingface.co/spaces/LanguageBind/MoE-LLaVA) and **all codes & datasets** are available now! Welcome to **watch** ๐Ÿ‘€ this repository for the latest updates. ## ๐Ÿ˜ฎ Highlights MoE-LLaVA shows excellent performance in multi-modal learning. ### ๐Ÿ”ฅ High performance, but with fewer parameters - with just **3B sparsely activated parameters**, MoE-LLaVA demonstrates performance comparable to the LLaVA-1.5-7B on various visual understanding datasets and even surpasses the LLaVA-1.5-13B in object hallucination benchmarks. ### ๐Ÿš€ Simple baseline, learning multi-modal interactions with sparse pathways. - With the addition of **a simple MoE tuning stage**, we can complete the training of MoE-LLaVA on **8 V100 GPUs** within 2 days. ## ๐Ÿค— Demo ### Gradio Web UI Highly recommend trying out our web demo by the following command, which incorporates all features currently supported by MoE-LLaVA. We also provide [online demo](https://huggingface.co/spaces/LanguageBind/MoE-LLaVA) in Huggingface Spaces. ```bash # use phi2 deepspeed --include localhost:0 moellava/serve/gradio_web_server.py --model-path "LanguageBind/MoE-LLaVA-Phi2-2.7B-4e" # use qwen deepspeed --include localhost:0 moellava/serve/gradio_web_server.py --model-path "LanguageBind/MoE-LLaVA-Qwen-1.8B-4e" # use stablelm deepspeed --include localhost:0 moellava/serve/gradio_web_server.py --model-path "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e" ``` ### CLI Inference ```bash # use phi2 deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-Phi2-2.7B-4e" --image-file "image.jpg" # use qwen deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-Qwen-1.8B-4e" --image-file "image.jpg" # use stablelm deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e" --image-file "image.jpg" ``` ## ๐Ÿณ Model Zoo | Model | LLM | Checkpoint | Avg | VQAv2 | GQA | VizWiz | SQA | T-VQA | POPE | MM-Bench| LLaVA-Bench-Wild | MM-Vet | |----------|-----------|-----------|---|---|---|---|---|---|---|---|---|---| | MoE-LLaVA-1.6Bร—4-Top2 | 1.6B | [LanguageBind/MoE-LLaVA-StableLM-1.6B-4e](https://huggingface.co/LanguageBind/MoE-LLaVA-StableLM-1.6B-4e) | 60.0 | 76.0 | 60.4 | 37.2 | 62.6 | 47.8 | 84.3 | 59.4 | 85.9 | 26.1 | | MoE-LLaVA-1.8Bร—4-Top2 | 1.8B | [LanguageBind/MoE-LLaVA-Qwen-1.8B-4e](https://huggingface.co/LanguageBind/MoE-LLaVA-Qwen-1.8B-4e) | 60.2 | 76.2 | 61.5 | 32.6 | 63.1 | 48.0 | 87.0 | 59.6 | 88.7 | 25.3 | | MoE-LLaVA-2.7Bร—4-Top2 | 2.7B | [LanguageBind/MoE-LLaVA-Phi2-2.7B-4e](https://huggingface.co/LanguageBind/MoE-LLaVA-Phi2-2.7B-4e) | 63.9 | 77.1 | 61.1 | 43.4 | 68.7 | 50.2 | 85.0 | 65.5 | 93.2 | 31.1 | <!-- | LLaVA-1.5 | 7B | [liuhaotian/llava-v1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b) | 62.0 | 78.5 | 62.0 | 50.0 | 66.8 | 58.2 | 85.9 | 64.3 | 31.1 | | LLaVA-1.5 | 13B | [liuhaotian/llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b) | 64.9 | 80.0 | 63.3 | 
53.6 | 71.6 | 61.3 | 85.9 | 67.7 | 36.1 | --> ## โš™๏ธ Requirements and Installation * Python >= 3.10 * Pytorch == 2.0.1 * CUDA Version >= 11.7 * **Transformers == 4.36.2** * **Tokenizers==0.15.1** * Install required packages: ```bash git clone https://github.com/PKU-YuanGroup/MoE-LLaVA cd MoE-LLaVA conda create -n moellava python=3.10 -y conda activate moellava pip install --upgrade pip # enable PEP 660 support pip install -e . pip install -e ".[train]" pip install flash-attn --no-build-isolation # Below are optional. For Qwen model. git clone https://github.com/Dao-AILab/flash-attention cd flash-attention && pip install . # Below are optional. Installing them might be slow. # pip install csrc/layer_norm # If the version of flash-attn is higher than 2.1.1, the following is not needed. # pip install csrc/rotary ``` ## ๐Ÿ—๏ธ Training & Validating The training & validating instruction is in [TRAIN.md](docs/TRAIN.md) & [EVAL.md](docs/EVAL.md). ## ๐Ÿ’ก Customizing your MoE-LLaVA The instruction is in [CUSTOM.md](docs/CUSTOM.md). ## ๐Ÿ˜ Visualization The instruction is in [VISUALIZATION.md](docs/VISUALIZATION.md). ## ๐Ÿค– API **We open source all codes.** If you want to load the model (e.g. ```LanguageBind/MoE-LLaVA```) on local, you can use the following code snippets. **Using the following command to run the code.** ```bash deepspeed predict.py ``` ```python import torch from moellava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN from moellava.conversation import conv_templates, SeparatorStyle from moellava.model.builder import load_pretrained_model from moellava.utils import disable_torch_init from moellava.mm_utils import tokenizer_image_token, get_model_name_from_path, KeywordsStoppingCriteria def main(): disable_torch_init() image = 'moellava/serve/examples/extreme_ironing.jpg' inp = 'What is unusual about this image?' model_path = 'LanguageBind/MoE-LLaVA-Phi2-2.7B-4e' # LanguageBind/MoE-LLaVA-Qwen-1.8B-4e or LanguageBind/MoE-LLaVA-StableLM-1.6B-4e device = 'cuda' load_4bit, load_8bit = False, False # FIXME: Deepspeed support 4bit or 8bit? model_name = get_model_name_from_path(model_path) tokenizer, model, processor, context_len = load_pretrained_model(model_path, None, model_name, load_8bit, load_4bit, device=device) image_processor = processor['image'] conv_mode = "phi" # qwen or stablelm conv = conv_templates[conv_mode].copy() roles = conv.roles image_tensor = image_processor.preprocess(image, return_tensors='pt')['pixel_values'].to(model.device, dtype=torch.float16) print(f"{roles[1]}: {inp}") inp = DEFAULT_IMAGE_TOKEN + '\n' + inp conv.append_message(conv.roles[0], inp) conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda() stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2 keywords = [stop_str] stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids) with torch.inference_mode(): output_ids = model.generate( input_ids, images=image_tensor, do_sample=True, temperature=0.2, max_new_tokens=1024, use_cache=True, stopping_criteria=[stopping_criteria]) outputs = tokenizer.decode(output_ids[0, input_ids.shape[1]:], skip_special_tokens=True).strip() print(outputs) if __name__ == '__main__': main() ``` ## ๐Ÿ™Œ Related Projects * [Video-LLaVA](https://github.com/PKU-YuanGroup/Video-LLaVA) This framework empowers the model to efficiently utilize the united visual tokens. 
* [LanguageBind](https://github.com/PKU-YuanGroup/LanguageBind) An open source five modalities language-based retrieval framework. ## ๐Ÿ‘ Acknowledgement * [LLaVA](https://github.com/haotian-liu/LLaVA) The codebase we built upon and it is an efficient large language and vision assistant. ## ๐Ÿ”’ License * The majority of this project is released under the Apache 2.0 license as found in the [LICENSE](https://github.com/PKU-YuanGroup/MoE-LLaVA/blob/main/LICENSE) file. * The service is a research preview intended for non-commercial use only, subject to the model [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation. ## โœ๏ธ Citation If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil:. ```BibTeX @misc{lin2024moellava, title={MoE-LLaVA: Mixture of Experts for Large Vision-Language Models}, author={Bin Lin and Zhenyu Tang and Yang Ye and Jiaxi Cui and Bin Zhu and Peng Jin and Junwu Zhang and Munan Ning and Li Yuan}, year={2024}, eprint={2401.15947}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```BibTeX @article{lin2023video, title={Video-LLaVA: Learning United Visual Representation by Alignment Before Projection}, author={Lin, Bin and Zhu, Bin and Ye, Yang and Ning, Munan and Jin, Peng and Yuan, Li}, journal={arXiv preprint arXiv:2311.10122}, year={2023} } ``` ## โœจ Star History [![Star History](https://api.star-history.com/svg?repos=PKU-YuanGroup/MoE-LLaVA&type=Date)](https://star-history.com/#PKU-YuanGroup/MoE-LLaVA&Date) ## ๐Ÿค Contributors <a href="https://github.com/PKU-YuanGroup/MoE-LLaVA/graphs/contributors"> <img src="https://contrib.rocks/image?repo=PKU-YuanGroup/MoE-LLaVA" /> </a>
k-seungri/k_whisper_tokenizer
k-seungri
2024-02-01T05:58:50Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-01-30T09:15:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
arpanl/fine-tuned
arpanl
2024-02-01T05:52:31Z
176
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-02-01T05:51:36Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder model-index: - name: fine-tuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine-tuned This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
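Below is a minimal, hedged inference sketch (not part of the original card). The image path is a placeholder, and the label names depend on the undocumented imagefolder dataset used for fine-tuning, so inspect the output rather than assuming a label set.

```python
# Hedged usage sketch for this fine-tuned ViT classifier (not from the original card).
# "example.jpg" is a placeholder; the pipeline also accepts a URL or a PIL.Image.
from transformers import pipeline

classifier = pipeline("image-classification", model="arpanl/fine-tuned")

predictions = classifier("example.jpg")
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```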
bartowski/DistilabelBeagle14-7B-exl2
bartowski
2024-02-01T05:48:08Z
6
0
null
[ "merge", "mergekit", "lazymergekit", "dpo", "rlhf", "rlaif", "distilabel", "text-generation", "base_model:mlabonne/Beagle14-7B", "base_model:finetune:mlabonne/Beagle14-7B", "license:cc-by-nc-4.0", "region:us" ]
text-generation
2024-02-01T05:32:02Z
--- license: cc-by-nc-4.0 base_model: mlabonne/Beagle14-7B tags: - merge - mergekit - lazymergekit - dpo - rlhf - rlaif - distilabel quantized_by: bartowski pipeline_tag: text-generation --- ## Exllama v2 Quantizations of DistilabelBeagle14-7B Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.12">turboderp's ExLlamaV2 v0.0.12</a> for quantization. # The "main" branch only contains the measurement.json; download one of the other branches for the model (see below) Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions. Original model: https://huggingface.co/argilla/DistilabelBeagle14-7B | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description | | ----- | ---- | ------- | ------ | ------ | ------ | ------------ | | [8_0](https://huggingface.co/Bartowski/DistilabelBeagle14-7B-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. | | [6_5](https://huggingface.co/Bartowski/DistilabelBeagle14-7B-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. | | [5_0](https://huggingface.co/Bartowski/DistilabelBeagle14-7B-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. | | [4_25](https://huggingface.co/Bartowski/DistilabelBeagle14-7B-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. | | [3_5](https://huggingface.co/Bartowski/DistilabelBeagle14-7B-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. | ## Download instructions With git: ```shell git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/DistilabelBeagle14-7B-exl2 DistilabelBeagle14-7B-exl2-6_5 ``` With huggingface hub (credit to TheBloke for instructions): ```shell pip3 install huggingface-hub ``` To download the `main` (only useful if you only care about measurement.json) branch to a folder called `DistilabelBeagle14-7B-exl2`: ```shell mkdir DistilabelBeagle14-7B-exl2 huggingface-cli download bartowski/DistilabelBeagle14-7B-exl2 --local-dir DistilabelBeagle14-7B-exl2 --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: Linux: ```shell mkdir DistilabelBeagle14-7B-exl2-6_5 huggingface-cli download bartowski/DistilabelBeagle14-7B-exl2 --revision 6_5 --local-dir DistilabelBeagle14-7B-exl2-6_5 --local-dir-use-symlinks False ``` Windows (which apparently doesn't like _ in folders sometimes?): ```shell mkdir DistilabelBeagle14-7B-exl2-6.5 huggingface-cli download bartowski/DistilabelBeagle14-7B-exl2 --revision 6_5 --local-dir DistilabelBeagle14-7B-exl2-6.5 --local-dir-use-symlinks False ``` Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
Tsuka25/w2v-bert-2.0-mongolian-colab-CV16.0
Tsuka25
2024-02-01T05:46:28Z
60
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-01-30T08:01:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Joshua-Abok/flan-t5-large-dialogsum_samsum_v2
Joshua-Abok
2024-02-01T05:37:00Z
92
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-01T05:30:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LoneStriker/DistilabelBeagle14-7B-GGUF
LoneStriker
2024-02-01T05:31:25Z
7
0
null
[ "gguf", "merge", "mergekit", "lazymergekit", "dpo", "rlhf", "rlaif", "distilabel", "arxiv:1910.09700", "base_model:mlabonne/Beagle14-7B", "base_model:quantized:mlabonne/Beagle14-7B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-02-01T05:08:56Z
--- license: cc-by-nc-4.0 base_model: mlabonne/Beagle14-7B tags: - merge - mergekit - lazymergekit - dpo - rlhf - rlaif - distilabel --- # Model Card for Model ID This is a preference tuned version of `mlabonne/Beagle14-7B` using a mix of Argilla's orca pairs and a new upcoming multi-turn dpo dataset. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Argilla - **Model type:** [More Information Needed] - **Language(s) (NLP):** English - **License:** cc-by-nc-4.0 - **Finetuned from model [optional]:** mlabonne/Beagle14-7B ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
cloudyu/Phoenix_DPO_60B
cloudyu
2024-02-01T05:28:31Z
111
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "yi", "moe", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T08:55:55Z
--- license: other tags: - yi - moe license_name: yi-license license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE --- This is a DPO fine-tuned MoE model with 60B parameters. ``` DPO Trainer TRL supports the DPO Trainer for training language models from preference data, as described in the paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al., 2023. ``` A GGUF version is available at [cloudyu/Phoenix_DPO_60B_gguf](https://huggingface.co/cloudyu/Phoenix_DPO_60B_gguf)
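For readers unfamiliar with the technique quoted above, here is a minimal, illustrative sketch of preference tuning with TRL's `DPOTrainer`. The base model id, dataset name, hyperparameters, and output directory are placeholders, not the actual recipe behind this model; a 60B model would in practice also need multi-GPU training and/or parameter-efficient methods.

```python
# Illustrative DPO fine-tuning sketch with TRL (placeholders, not this model's exact recipe).
# "my-org/preference-pairs" is a hypothetical dataset with prompt/chosen/rejected columns.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "path-or-id-of-the-base-model"  # placeholder base model
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

train_dataset = load_dataset("my-org/preference-pairs", split="train")  # placeholder

trainer = DPOTrainer(
    model,
    ref_model=None,            # TRL clones a frozen reference model when None
    beta=0.1,                  # strength of the implicit KL penalty
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    max_length=1024,
    max_prompt_length=512,
    args=TrainingArguments(
        output_dir="dpo-out",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=5e-6,
        num_train_epochs=1,
        bf16=True,
    ),
)
trainer.train()
```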
vivekdugale/en_pipeline
vivekdugale
2024-02-01T05:18:14Z
2
0
spacy
[ "spacy", "token-classification", "en", "model-index", "region:us" ]
token-classification
2024-01-31T13:26:33Z
--- tags: - spacy - token-classification language: - en model-index: - name: en_pipeline results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.7554585153 - name: NER Recall type: recall value: 0.8277511962 - name: NER F Score type: f_score value: 0.7899543379 --- | Feature | Description | | --- | --- | | **Name** | `en_pipeline` | | **Version** | `0.0.0` | | **spaCy** | `>=3.7.2,<3.8.0` | | **Default Pipeline** | `transformer`, `ner` | | **Components** | `transformer`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (1 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `SECTOR` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 79.00 | | `ENTS_P` | 75.55 | | `ENTS_R` | 82.78 | | `TRANSFORMER_LOSS` | 10044.45 | | `NER_LOSS` | 47764.82 |
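A minimal usage sketch (not part of the original card): assuming the packaged pipeline wheel from this repo has been installed so that `spacy.load("en_pipeline")` resolves, the single `SECTOR` label can be read off the predicted entities. The example sentence is invented.

```python
# Hedged usage sketch; the input sentence is an invented example.
import spacy

nlp = spacy.load("en_pipeline")  # requires the packaged wheel from this repo to be installed
doc = nlp("The fund shifted most of its allocation toward the renewable energy sector.")

# The pipeline's NER component predicts a single label: SECTOR.
for ent in doc.ents:
    print(ent.text, ent.label_, ent.start_char, ent.end_char)
```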
mindlywork/Maskot1
mindlywork
2024-02-01T05:13:47Z
2
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:cc", "region:us" ]
text-to-image
2024-02-01T05:11:06Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: Maskot1 output: url: images/out-0 (12).png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: Maskot1 license: cc --- # Maskot1 <Gallery /> ## Model description Maskot1 ## Trigger words You should use `Maskot1` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/dasdsff/Maskot1/tree/main) them in the Files & versions tab.
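A hedged loading sketch, assuming the LoRA weight file in this repo can be resolved automatically by `load_lora_weights`; the prompt is an arbitrary example built around the `Maskot1` trigger word.

```python
# Hedged sketch: SDXL base + this LoRA, triggered with "Maskot1".
# If the adapter file is not auto-detected, pass weight_name="<file>.safetensors" explicitly.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("mindlywork/Maskot1")

image = pipe(
    "Maskot1, studio portrait of a friendly mascot, soft lighting",
    num_inference_steps=30,
).images[0]
image.save("maskot1.png")
```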
KuriT/dqn-SpaceInvadersNoFrameskip-v4
KuriT
2024-02-01T05:03:47Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-01T05:03:11Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 646.50 +/- 168.11 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga KuriT -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga KuriT -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga KuriT ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
octnn/ppo-SnowballTarget
octnn
2024-02-01T05:00:27Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2024-02-01T05:00:24Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity 2. Find your model_id: octnn/ppo-SnowballTarget 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
XiaQiang/dqn-SpaceInvadersNoFrameskip-v4
XiaQiang
2024-02-01T04:58:16Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-01T04:57:35Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 610.50 +/- 224.01 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga XiaQiang -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga XiaQiang -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga XiaQiang ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
dhanikitkat/topic-modelling-indo
dhanikitkat
2024-02-01T04:54:18Z
119
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "topic modelling indo", "id", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-01T04:45:08Z
--- license: mit language: - id pipeline_tag: text-classification tags: - topic modelling indo ---
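A minimal inference sketch (not from the original card). The Indonesian example sentence is invented, and the label set is not documented, so print whatever the model returns rather than assuming topic names.

```python
# Hedged usage sketch for the Indonesian topic classifier.
from transformers import pipeline

classifier = pipeline("text-classification", model="dhanikitkat/topic-modelling-indo")

# Example Indonesian input: "Staple food prices in traditional markets rise ahead of the holiday."
print(classifier("Harga bahan pokok di pasar tradisional naik menjelang hari raya."))
# e.g. [{'label': '<topic name>', 'score': 0.97}] -- labels depend on the model's config
```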
jlbaker361/ddpo-stability-dcgan
jlbaker361
2024-02-01T04:51:43Z
0
1
diffusers
[ "diffusers", "safetensors", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-01T04:47:56Z
--- {} --- # DDPO trained model Training configuration: num_epochs=10, train_gradient_accumulation_steps=1, sample_num_steps=30, sample_batch_size=16, train_batch_size=16, sample_num_batches_per_epoch=32. Based on stabilityai/stable-diffusion-2-base and then trained off of None.
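Since the repo is tagged as a `StableDiffusionPipeline`, it should load like any diffusers SD 2.x checkpoint; below is a hedged sampling sketch where the prompt and settings are arbitrary examples.

```python
# Hedged sampling sketch for the DDPO-trained checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "jlbaker361/ddpo-stability-dcgan", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of a corgi riding a skateboard", num_inference_steps=30).images[0]
image.save("sample.png")
```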
jan-hq/stealth-rag-v1
jan-hq
2024-02-01T04:34:45Z
3
0
peft
[ "peft", "safetensors", "mistral", "alignment-handbook", "generated_from_trainer", "trl", "sft", "dataset:jan-hq/rag_dataset_1200_binarized", "dataset:jan-hq/rag_dataset_12000_binarized", "dataset:jan-hq/rag_hallucination_dataset_1000_binarized", "dataset:jan-hq/rag_full_20000_binarized", "dataset:jan-hq/bagel_sft_binarized", "dataset:jan-hq/dolphin_binarized", "dataset:jan-hq/openhermes_binarized", "base_model:jan-hq/stealth-v1.3", "base_model:adapter:jan-hq/stealth-v1.3", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2024-02-01T02:21:12Z
--- license: apache-2.0 library_name: peft tags: - alignment-handbook - generated_from_trainer - trl - sft - generated_from_trainer datasets: - jan-hq/rag_dataset_1200_binarized - jan-hq/rag_dataset_12000_binarized - jan-hq/rag_hallucination_dataset_1000_binarized - jan-hq/rag_full_20000_binarized - jan-hq/bagel_sft_binarized - jan-hq/dolphin_binarized - jan-hq/openhermes_binarized base_model: jan-hq/stealth-v1.3 model-index: - name: stealth-rag-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # stealth-rag-v1 This model is a fine-tuned version of [jan-hq/stealth-v1.3](https://huggingface.co/jan-hq/stealth-v1.3) on the jan-hq/rag_dataset_1200_binarized, the jan-hq/rag_dataset_12000_binarized, the jan-hq/rag_hallucination_dataset_1000_binarized, the jan-hq/rag_full_20000_binarized, the jan-hq/bagel_sft_binarized, the jan-hq/dolphin_binarized and the jan-hq/openhermes_binarized datasets. It achieves the following results on the evaluation set: - Loss: 1.3883 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.0
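Because this repo stores a PEFT adapter on top of `jan-hq/stealth-v1.3`, a hedged loading sketch looks like the following; the prompt and generation settings are placeholders.

```python
# Hedged sketch: load the base model, then attach this PEFT adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "jan-hq/stealth-v1.3"
adapter_id = "jan-hq/stealth-rag-v1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "Context: ...retrieved passages...\nQuestion: ...\nAnswer:"  # placeholder RAG-style prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```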
andysalerno/rainbowfish-v2
andysalerno
2024-02-01T04:22:03Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-01T04:09:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
cloudyu/Mixtral_7Bx4_MOE_DPO
cloudyu
2024-02-01T04:19:02Z
15
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-18T06:40:57Z
--- license: cc-by-nc-4.0 --- * [This is a DPO-improved version of cloudyu/Mixtral_7Bx4_MOE_24B](https://huggingface.co/cloudyu/Mixtral_7Bx4_MOE_24B) * [DPO Trainer](https://huggingface.co/docs/trl/main/en/dpo_trainer) * Metrics improved by DPO ![Metrics improvement](dpo.jpg) ![Metrics improvement](dpo-metrics.jpg)
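For readers unfamiliar with the DPO step referenced above, the sketch below shows roughly what a `trl` `DPOTrainer` run looks like. It is not the author's training script: the tiny inline dataset is a placeholder, argument names differ slightly between trl versions, and a 4x7B MoE model would in practice need quantization or PEFT adapters to fit in memory.

```python
# Rough DPO sketch (not the author's script); assumes a preference dataset with
# "prompt", "chosen" and "rejected" columns and a trl version whose DPOTrainer
# still accepts beta/tokenizer directly.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "cloudyu/Mixtral_7Bx4_MOE_24B"  # base model named in this card
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder preference data; a real run would use a full preference dataset.
train_dataset = Dataset.from_dict({
    "prompt":   ["What is 2 + 2?"],
    "chosen":   ["2 + 2 equals 4."],
    "rejected": ["2 + 2 equals 5."],
})

trainer = DPOTrainer(
    model,
    ref_model=None,  # trl builds a frozen reference copy when None
    args=TrainingArguments(output_dir="mixtral-dpo", per_device_train_batch_size=1),
    beta=0.1,        # strength of the KL penalty toward the reference model
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```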
devjwsong/q-FrozenLake-v1-4x4-noSlippery
devjwsong
2024-02-01T04:12:58Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-02-01T04:12:56Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="devjwsong/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
zhaoxinwind/QA_model
zhaoxinwind
2024-02-01T03:52:04Z
111
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-01-29T11:54:53Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: QA_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # QA_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0017 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0855 | 1.0 | 1202 | 0.0812 | | 0.0312 | 2.0 | 2404 | 0.0017 | | 0.0005 | 3.0 | 3606 | 0.0017 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.2+cpu - Datasets 2.16.1 - Tokenizers 0.15.1
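Since the card does not show how to query the model, here is a minimal inference example using the standard `transformers` question-answering pipeline; the question and context strings are invented placeholders.

```python
# Minimal extractive-QA example; question/context are made-up placeholders.
from transformers import pipeline

qa = pipeline("question-answering", model="zhaoxinwind/QA_model")
result = qa(
    question="What architecture was fine-tuned?",
    context="QA_model is a fine-tuned version of distilbert-base-uncased.",
)
print(result["answer"], round(result["score"], 3))
```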
octnn/Reinforce-Pixelcopter-PLE-v0
octnn
2024-02-01T03:51:01Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-01-31T03:08:08Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 24.00 +/- 20.29 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
alpindale/miquella-120b-gguf
alpindale
2024-02-01T03:50:53Z
49
17
null
[ "gguf", "merge", "endpoints_compatible", "region:us" ]
null
2024-02-01T02:00:08Z
--- tags: - merge --- # Miquella 120B GGUF GGUF quantized weights for [miquella-120b](https://huggingface.co/alpindale/miquella-120b). Contains *all* quants. I used Importance Matrices generated from Q8_0 quant of the model. The dataset used for that was random junk for optimal quality. Due to the limitations of HF's file size, the larger files were split into multiple chunks. Instructions below. ## Linux Example uses Q3_K_L. Replace the names appropriately for your quant of choice. ```sh cat miquella-120b.Q3_K_L.gguf_part_* > miquella-120b.Q3_K_L.gguf && rm miquella-120b.Q3_K_L.gguf_part_* ``` ## Windows Example uses Q3_K_L. Replace the names appropriately for your quant of choice. ```sh COPY /B miquella-120b.Q3_K_L.gguf_part_aa + miquella-120b.Q3_K_L.gguf_part_ab miquella-120b.Q3_K_L.gguf ``` Then delete the two splits.
whythoname/TheWondermixV11
whythoname
2024-02-01T03:39:12Z
1
0
diffusers
[ "diffusers", "region:us" ]
null
2024-02-01T03:15:59Z
--- library_name: diffusers ---
tr-aravindan/output_emotion
tr-aravindan
2024-02-01T03:28:35Z
2
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:emotion", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "license:mit", "region:us" ]
null
2024-01-23T08:20:28Z
--- license: mit library_name: peft tags: - trl - sft - generated_from_trainer datasets: - emotion base_model: gpt2 model-index: - name: output_emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output_emotion This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 7.0852 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 8.5945 | 2.0 | 1800 | 7.5670 | | 7.7948 | 4.0 | 3600 | 7.0852 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.0 - Datasets 2.15.0 - Tokenizers 0.15.0
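Because this repo stores a PEFT adapter rather than full model weights, loading it means attaching the adapter to its GPT-2 base model. The snippet below is a minimal sketch, not code from the card, and assumes a standard PEFT adapter checkpoint.

```python
# Load the adapter on top of its GPT-2 base model, then generate a short sample.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, "tr-aravindan/output_emotion")

inputs = tokenizer("I feel", return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```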
tr-aravindan/output
tr-aravindan
2024-02-01T03:28:35Z
3
0
peft
[ "peft", "safetensors", "generated_from_trainer", "dataset:emotion", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "license:mit", "region:us" ]
null
2024-01-20T21:31:04Z
--- license: mit library_name: peft tags: - generated_from_trainer datasets: - emotion base_model: gpt2 model-index: - name: output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 3.8303 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.25 | 250 | 4.8185 | | No log | 0.5 | 500 | 4.3814 | | No log | 0.75 | 750 | 4.1230 | | No log | 1.0 | 1000 | 4.0088 | | No log | 1.25 | 1250 | 3.9536 | | No log | 1.5 | 1500 | 3.9208 | | No log | 1.75 | 1750 | 3.8946 | | 4.644 | 2.0 | 2000 | 3.8799 | | 4.644 | 2.25 | 2250 | 3.8651 | | 4.644 | 2.5 | 2500 | 3.8552 | | 4.644 | 2.75 | 2750 | 3.8464 | | 4.644 | 3.0 | 3000 | 3.8399 | | 4.644 | 3.25 | 3250 | 3.8364 | | 4.644 | 3.5 | 3500 | 3.8333 | | 4.644 | 3.75 | 3750 | 3.8311 | | 4.0742 | 4.0 | 4000 | 3.8303 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.1.2 - Datasets 2.15.0 - Tokenizers 0.15.1
AIFT/AIFT-instruct-dpo-v1.3-42dot_LLM-SFT-1.3B
AIFT
2024-02-01T03:25:12Z
278
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-01T01:24:11Z
--- license: cc-by-sa-4.0 --- <h1>AIFT-instruct-42dot_LLM-SFT-1.3B</h1> <b><Training data construction></b> <br> The KOR-OpenOrca-Platypus data released by kyujinpy was used after partial removal (sampling) and cleaning. The relevant tasks were then extracted from that data and, based on them, training data was built in-house from NLP-related open-source data: history, science, math, machine reading comprehension, and review-analysis problems were generated with GPT, and additional training data was built from the AI Hub general-knowledge and machine reading comprehension datasets (morphology-related, MRC-related, and summarization). History and general-knowledge quizzes from various blogs were converted into training-data format by hand. Following the AI2AI Challenge data format, about 500 elementary-level science and math problems were created with GPT. English translation data (EN-KO / KO-EN) was also used for training. In total, roughly 40k examples were used. <br> <br> + TruthfulQA-style questions were added (true/false questions about common misconceptions). + Machine reading comprehension training data was built by obtaining answers through ChatGPT. + Grammar-related training data. <br> <br> DPO dataset <br> The chosen responses of the ko-HH-RLHF data were regenerated with gpt-3.5-turbo and used for training. About 1,200 TruthfulQA examples were also created in-house. <br> ### The training data files are not publicly released. <br> <Model> <br> Training was performed using 42dot's publicly released 42dot_LLM-SFT-1.3B as the base model. <br> <br> <br> <b><Training></b> <br> Training was done with LoRA on two A100 40G GPUs.
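The card does not include inference code; the sketch below is a generic causal-LM example, and the prompt format shown is a guess rather than an official template documented by the authors.

```python
# Generic generation sketch for the 1.3B instruction-tuned model; the Korean
# prompt below is an invented example, not an official prompt template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "AIFT/AIFT-instruct-dpo-v1.3-42dot_LLM-SFT-1.3B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

prompt = "질문: 대한민국의 수도는 어디인가요?\n답변:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```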
erik1126/Enet
erik1126
2024-02-01T03:24:11Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-01-23T08:16:36Z
--- license: apache-2.0 --- # Model for signature verification Don't use it in production. This is just for an experiment.
DataGenius/MyLinkedlnProPic
DataGenius
2024-02-01T03:19:57Z
1
1
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2024-02-01T03:19:01Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: Hazz photos tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain Text encoder was not trained.
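The card gives no usage snippet. Assuming the repo holds LoRA weights (the usual output of an AutoTrain DreamBooth SDXL run), inference would look roughly like this; adjust if the repo actually contains full pipeline weights.

```python
# Hedged sketch: apply the DreamBooth weights from this repo on top of SDXL base.
# Assumes LoRA weights were produced by AutoTrain; not verified against the repo.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("DataGenius/MyLinkedlnProPic")

image = pipe("Hazz photos, professional profile picture", num_inference_steps=30).images[0]
image.save("profile.png")
```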
YaHi/teacher_electra_small_building_binary
YaHi
2024-02-01T03:09:56Z
89
0
transformers
[ "transformers", "pytorch", "electra", "text-classification", "generated_from_trainer", "base_model:google/electra-small-discriminator", "base_model:finetune:google/electra-small-discriminator", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-19T02:08:36Z
--- license: apache-2.0 base_model: google/electra-small-discriminator tags: - generated_from_trainer model-index: - name: teacher_electra_small_building_binary results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # teacher_electra_small_building_binary This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 1022 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0 - Datasets 2.12.0 - Tokenizers 0.13.2
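As the fine-tuning data is not documented, the example below only illustrates the mechanics of calling the classifier; the input sentence is invented and the meaning of the returned labels is an assumption.

```python
# Illustrative call only; label semantics depend on the undocumented dataset.
from transformers import pipeline

clf = pipeline("text-classification", model="YaHi/teacher_electra_small_building_binary")
print(clf("Cracks are visible along the load-bearing wall."))
```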
chathuranga-jayanath/codet5-small-v11
chathuranga-jayanath
2024-02-01T03:00:57Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:Salesforce/codet5-small", "base_model:finetune:Salesforce/codet5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-31T18:09:43Z
--- license: apache-2.0 base_model: Salesforce/codet5-small tags: - generated_from_trainer model-index: - name: codet5-small-v11 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codet5-small-v11 This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1410 - Bleu Score: 0.0004 - Gen Len: 13.1271 ## Model description Trained: - on: chathuranga-jayanath/formatted-selfapr-train-data - prompt: selfapr train data format -> [BUG]...[CONTEXT]...[CLASS]...[ERROR] <fe> <ce> ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 30 - eval_batch_size: 30 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu Score | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:----------:|:-------:| | 0.2246 | 1.0 | 10656 | 0.1778 | 0.0004 | 13.1185 | | 0.1915 | 2.0 | 21312 | 0.1489 | 0.0004 | 13.1311 | | 0.1623 | 3.0 | 31968 | 0.1410 | 0.0004 | 13.1271 | ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
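To make the prompt format above concrete, here is a hypothetical inference example; the buggy snippet, class name, and error message are invented, and the exact placement of the `<fe>`/`<ce>` markers is an assumption.

```python
# Hypothetical repair query in the SelfAPR-style format described above.
from transformers import AutoTokenizer, T5ForConditionalGeneration

repo = "chathuranga-jayanath/codet5-small-v11"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = T5ForConditionalGeneration.from_pretrained(repo)

prompt = (
    "[BUG] return a - b; "
    "[CONTEXT] public int add(int a, int b) { return a - b; } "
    "[CLASS] Calculator [ERROR] expected 5 but was -1"
)
inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```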
Anguuuuus/enhanced_german
Anguuuuus
2024-02-01T02:54:49Z
146
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2024-01-29T05:57:45Z
--- license: apache-2.0 base_model: facebook/wav2vec2-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: enhanced_german results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # enhanced_german This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6891 - Accuracy: 0.6667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 1 | 0.7047 | 0.3333 | | No log | 2.0 | 2 | 0.6962 | 0.5 | | No log | 3.0 | 3 | 0.6878 | 0.5833 | | No log | 4.0 | 4 | 0.6962 | 0.6667 | | No log | 5.0 | 5 | 0.6954 | 0.6667 | | No log | 6.0 | 6 | 0.6951 | 0.5833 | | No log | 7.0 | 7 | 0.6922 | 0.5833 | | No log | 8.0 | 8 | 0.6907 | 0.5833 | | No log | 9.0 | 9 | 0.6899 | 0.5833 | | 0.3093 | 10.0 | 10 | 0.6891 | 0.6667 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
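No inference example is given in the card, so a minimal one follows; `sample.wav` is a placeholder path and the label set is unknown.

```python
# Minimal audio-classification call; "sample.wav" is a placeholder file path.
from transformers import pipeline

clf = pipeline("audio-classification", model="Anguuuuus/enhanced_german")
print(clf("sample.wav"))
```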
cloudyu/Yi-34Bx2-MoE-60B-GGUF
cloudyu
2024-02-01T02:49:08Z
16
4
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-02-01T02:22:00Z
--- tags: - gguf --- ## Description This repo contains GGUF format model files for [cloudyu/Yi-34Bx2-MoE-60B](https://huggingface.co/cloudyu/Yi-34Bx2-MoE-60B). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. ### How to run GGUF with llama.cpp on an A10 (24G vram) ``` git clone https://github.com/ggerganov/llama.cpp.git cd llama.cpp/ make LLAMA_CUBLAS=1 ./main --model ./cloudyu_Yi-34Bx2-MoE-60B_Q3_K_XS.gguf -p "what is biggest animal?" -i -ngl 36 ```
alk/phi2-dolly-sum-finetune
alk
2024-02-01T02:46:01Z
3
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-02-01T02:45:56Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: microsoft/phi-2 model-index: - name: phi2-dolly-sum-finetune results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi2-dolly-sum-finetune This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.0505 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.9329 | 0.05 | 25 | 2.4178 | | 2.4832 | 0.09 | 50 | 2.1541 | | 2.1688 | 0.14 | 75 | 2.0774 | | 2.2247 | 0.18 | 100 | 2.0725 | | 2.225 | 0.23 | 125 | 2.0652 | | 2.2217 | 0.27 | 150 | 2.0635 | | 2.2282 | 0.32 | 175 | 2.0611 | | 2.1104 | 0.37 | 200 | 2.0608 | | 2.1583 | 0.41 | 225 | 2.0569 | | 2.1197 | 0.46 | 250 | 2.0565 | | 2.1257 | 0.5 | 275 | 2.0559 | | 2.0018 | 0.55 | 300 | 2.0512 | | 2.0203 | 0.6 | 325 | 2.0546 | | 2.1332 | 0.64 | 350 | 2.0519 | | 2.1585 | 0.69 | 375 | 2.0503 | | 2.1287 | 0.73 | 400 | 2.0510 | | 2.1431 | 0.78 | 425 | 2.0515 | | 2.1601 | 0.82 | 450 | 2.0522 | | 2.088 | 0.87 | 475 | 2.0481 | | 2.0462 | 0.92 | 500 | 2.0505 | ### Framework versions - PEFT 0.8.1 - Transformers 4.37.2 - Pytorch 2.2.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
devjwsong/ppo-Huggy
devjwsong
2024-02-01T02:38:36Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-02-01T02:38:31Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: devjwsong/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
son-of-man/HoloViolet-7B-test4
son-of-man
2024-02-01T02:36:38Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "KoboldAI/Mistral-7B-Holodeck-1", "FPHam/Sydney_Pirate_Mistral_7b", "base_model:FPHam/Sydney_Pirate_Mistral_7b", "base_model:merge:FPHam/Sydney_Pirate_Mistral_7b", "base_model:KoboldAI/Mistral-7B-Holodeck-1", "base_model:merge:KoboldAI/Mistral-7B-Holodeck-1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-01T02:33:05Z
--- tags: - merge - mergekit - lazymergekit - KoboldAI/Mistral-7B-Holodeck-1 - FPHam/Sydney_Pirate_Mistral_7b base_model: - KoboldAI/Mistral-7B-Holodeck-1 - FPHam/Sydney_Pirate_Mistral_7b --- # HoloViolet-7B-test4 HoloViolet-7B-test4 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [KoboldAI/Mistral-7B-Holodeck-1](https://huggingface.co/KoboldAI/Mistral-7B-Holodeck-1) * [FPHam/Sydney_Pirate_Mistral_7b](https://huggingface.co/FPHam/Sydney_Pirate_Mistral_7b) ## ๐Ÿงฉ Configuration ```yaml merge_method: dare_ties base_model: GreenNode/GreenNode-mini-7B-multilingual-v1olet models: - model: GreenNode/GreenNode-mini-7B-multilingual-v1olet - model: KoboldAI/Mistral-7B-Holodeck-1 parameters: weight: 0.42 - model: FPHam/Sydney_Pirate_Mistral_7b parameters: weight: 0.21 dtype: bfloat16 ``` ## ๐Ÿ’ป Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "son-of-man/HoloViolet-7B-test4" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
jlbaker361/dcgan-gpu-wikiart25
jlbaker361
2024-02-01T02:33:59Z
0
0
null
[ "region:us" ]
null
2024-01-25T06:24:49Z
--- {} --- Creative Adversarial Network. epochs: 3; dataset: jlbaker361/wikiart-balanced25; n classes: 27; batch_size: 4; images were resized to 768 and then center cropped to 512; used clip=False; discriminator parameters: init_dim: 32, final_dim: 512; generator parameters: input noise_dim: 100
omartariq612/quran-whisper-large-v3-epoch-2
omartariq612
2024-02-01T02:25:07Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-01T02:25:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rhplus0831/maid-yuzu-v2-alter
rhplus0831
2024-02-01T02:19:34Z
5
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "mergekit", "merge", "base_model:smelborp/MixtralOrochi8x7B", "base_model:merge:smelborp/MixtralOrochi8x7B", "base_model:ycros/BagelMIsteryTour-v2-8x7B", "base_model:merge:ycros/BagelMIsteryTour-v2-8x7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-01T01:52:19Z
--- base_model: - smelborp/MixtralOrochi8x7B - ycros/BagelMIsteryTour-v2-8x7B tags: - mergekit - merge --- # maid-yuzu-v2-alter This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). This model is an experiment to combine two models I liked. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [smelborp/MixtralOrochi8x7B](https://huggingface.co/smelborp/MixtralOrochi8x7B) * [ycros/BagelMIsteryTour-v2-8x7B](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: model: path: smelborp/MixtralOrochi8x7B dtype: bfloat16 merge_method: slerp parameters: t: - value: 0.5 slices: - sources: - layer_range: [0, 32] model: model: path: smelborp/MixtralOrochi8x7B - layer_range: [0, 32] model: model: path: ycros/BagelMIsteryTour-v2-8x7B ```
blzncz/segformer-finetuned-4ss1st3r_s3gs3m_24Jan_negro-10k-steps
blzncz
2024-02-01T02:09:47Z
178
0
transformers
[ "transformers", "pytorch", "tensorboard", "segformer", "image-segmentation", "vision", "generated_from_trainer", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2024-01-31T10:49:13Z
--- license: other tags: - image-segmentation - vision - generated_from_trainer model-index: - name: segformer-finetuned-4ss1st3r_s3gs3m_24Jan_negro-10k-steps results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-finetuned-4ss1st3r_s3gs3m_24Jan_negro-10k-steps This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the blzncz/4ss1st3r_s3gs3m_24Jan_negro dataset. It achieves the following results on the evaluation set: - Loss: 0.2142 - Mean Iou: 0.5724 - Mean Accuracy: 0.7571 - Overall Accuracy: 0.9468 - Accuracy Bg: nan - Accuracy Fallo cohesivo: 0.9826 - Accuracy Fallo malla: 0.7246 - Accuracy Fallo adhesivo: 0.9679 - Accuracy Fallo burbuja: 0.3533 - Iou Bg: 0.0 - Iou Fallo cohesivo: 0.9368 - Iou Fallo malla: 0.6678 - Iou Fallo adhesivo: 0.9310 - Iou Fallo burbuja: 0.3263 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: polynomial - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Bg | Accuracy Fallo cohesivo | Accuracy Fallo malla | Accuracy Fallo adhesivo | Accuracy Fallo burbuja | Iou Bg | Iou Fallo cohesivo | Iou Fallo malla | Iou Fallo adhesivo | Iou Fallo burbuja | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------:|:-----------------------:|:--------------------:|:-----------------------:|:----------------------:|:------:|:------------------:|:---------------:|:------------------:|:-----------------:| | 0.1902 | 1.0 | 219 | 0.2615 | 0.5072 | 0.7247 | 0.9038 | nan | 0.9203 | 0.7991 | 0.9401 | 0.2393 | 0.0 | 0.8853 | 0.5556 | 0.9006 | 0.1944 | | 0.1367 | 2.0 | 438 | 0.2067 | 0.5492 | 0.7602 | 0.9293 | nan | 0.9599 | 0.7487 | 0.9317 | 0.4004 | 0.0 | 0.9160 | 0.6120 | 0.8988 | 0.3195 | | 0.1066 | 3.0 | 657 | 0.1963 | 0.5659 | 0.7814 | 0.9313 | nan | 0.9520 | 0.8022 | 0.9545 | 0.4169 | 0.0 | 0.9175 | 0.6270 | 0.9267 | 0.3584 | | 0.1102 | 4.0 | 876 | 0.1595 | 0.5782 | 0.7756 | 0.9444 | nan | 0.9727 | 0.7669 | 0.9693 | 0.3934 | 0.0 | 0.9336 | 0.6828 | 0.9326 | 0.3422 | | 0.1114 | 5.0 | 1095 | 0.1678 | 0.5772 | 0.7950 | 0.9378 | nan | 0.9619 | 0.7778 | 0.9756 | 0.4648 | 0.0 | 0.9255 | 0.6534 | 0.9151 | 0.3922 | | 0.0897 | 6.0 | 1314 | 0.1726 | 0.5811 | 0.7976 | 0.9420 | nan | 0.9701 | 0.7598 | 0.9723 | 0.4881 | 0.0 | 0.9307 | 0.6613 | 0.9170 | 0.3965 | | 0.0788 | 7.0 | 1533 | 0.2096 | 0.5491 | 0.7253 | 0.9342 | nan | 0.9898 | 0.5936 | 0.9381 | 0.3797 | 0.0 | 0.9235 | 0.5698 | 0.9149 | 0.3374 | | 0.0788 | 8.0 | 1752 | 0.1574 | 0.5774 | 0.7733 | 0.9465 | nan | 0.9726 | 0.7858 | 0.9675 | 0.3673 | 0.0 | 0.9359 | 0.6914 | 0.9264 | 0.3331 | | 0.0855 | 9.0 | 1971 | 0.1970 | 0.5406 | 0.7141 | 0.9380 | nan | 0.9866 | 0.6305 | 0.9708 | 0.2687 | 0.0 | 0.9274 | 0.5984 | 0.9224 | 0.2548 | | 0.0761 | 10.0 | 2190 | 0.1903 | 0.5564 | 0.7479 | 0.9382 | nan | 0.9746 | 0.7050 | 0.9737 | 0.3383 | 0.0 | 0.9268 | 0.6272 | 0.9182 | 0.3098 | | 0.0686 | 11.0 | 2409 | 0.1910 | 0.5562 | 0.7435 | 
0.9393 | nan | 0.9827 | 0.6605 | 0.9738 | 0.3572 | 0.0 | 0.9285 | 0.6156 | 0.9209 | 0.3160 | | 0.062 | 12.0 | 2628 | 0.2038 | 0.5453 | 0.7399 | 0.9334 | nan | 0.9728 | 0.6739 | 0.9811 | 0.3317 | 0.0 | 0.9214 | 0.6013 | 0.9035 | 0.3001 | | 0.0586 | 13.0 | 2847 | 0.1914 | 0.5471 | 0.7342 | 0.9402 | nan | 0.9758 | 0.7103 | 0.9814 | 0.2693 | 0.0 | 0.9290 | 0.6397 | 0.9150 | 0.2517 | | 0.0531 | 14.0 | 3066 | 0.1747 | 0.5716 | 0.7689 | 0.9449 | nan | 0.9701 | 0.7945 | 0.9588 | 0.3522 | 0.0 | 0.9339 | 0.6815 | 0.9280 | 0.3147 | | 0.0522 | 15.0 | 3285 | 0.1933 | 0.5591 | 0.7399 | 0.9454 | nan | 0.9810 | 0.7222 | 0.9744 | 0.2820 | 0.0 | 0.9351 | 0.6603 | 0.9355 | 0.2645 | | 0.059 | 16.0 | 3504 | 0.1897 | 0.5691 | 0.7878 | 0.9384 | nan | 0.9499 | 0.8594 | 0.9809 | 0.3608 | 0.0 | 0.9252 | 0.6741 | 0.9159 | 0.3303 | | 0.0503 | 17.0 | 3723 | 0.1895 | 0.5652 | 0.7795 | 0.9365 | nan | 0.9588 | 0.7866 | 0.9808 | 0.3917 | 0.0 | 0.9238 | 0.6508 | 0.9004 | 0.3511 | | 0.0518 | 18.0 | 3942 | 0.2131 | 0.5533 | 0.7332 | 0.9402 | nan | 0.9807 | 0.6877 | 0.9645 | 0.2998 | 0.0 | 0.9294 | 0.6248 | 0.9334 | 0.2790 | | 0.0439 | 19.0 | 4161 | 0.2168 | 0.5565 | 0.7411 | 0.9388 | nan | 0.9801 | 0.6828 | 0.9567 | 0.3448 | 0.0 | 0.9278 | 0.6194 | 0.9234 | 0.3121 | | 0.0459 | 20.0 | 4380 | 0.2688 | 0.5266 | 0.7127 | 0.9266 | nan | 0.9824 | 0.5567 | 0.9841 | 0.3277 | 0.0 | 0.9149 | 0.5329 | 0.8866 | 0.2987 | | 0.043 | 21.0 | 4599 | 0.2395 | 0.5542 | 0.7409 | 0.9369 | nan | 0.9821 | 0.6444 | 0.9745 | 0.3625 | 0.0 | 0.9258 | 0.5974 | 0.9228 | 0.3248 | | 0.0436 | 22.0 | 4818 | 0.1790 | 0.5736 | 0.7750 | 0.9441 | nan | 0.9706 | 0.7783 | 0.9694 | 0.3819 | 0.0 | 0.9331 | 0.6772 | 0.9143 | 0.3433 | | 0.0443 | 23.0 | 5037 | 0.1843 | 0.5683 | 0.7613 | 0.9442 | nan | 0.9756 | 0.7470 | 0.9716 | 0.3511 | 0.0 | 0.9335 | 0.6684 | 0.9177 | 0.3219 | | 0.0402 | 24.0 | 5256 | 0.2048 | 0.5666 | 0.7535 | 0.9429 | nan | 0.9800 | 0.7089 | 0.9706 | 0.3544 | 0.0 | 0.9324 | 0.6457 | 0.9302 | 0.3246 | | 0.0399 | 25.0 | 5475 | 0.2102 | 0.5651 | 0.7524 | 0.9430 | nan | 0.9830 | 0.6875 | 0.9754 | 0.3637 | 0.0 | 0.9327 | 0.6412 | 0.9231 | 0.3287 | | 0.0404 | 26.0 | 5694 | 0.1993 | 0.5792 | 0.7815 | 0.9460 | nan | 0.9690 | 0.8035 | 0.9697 | 0.3837 | 0.0 | 0.9351 | 0.6876 | 0.9289 | 0.3443 | | 0.0388 | 27.0 | 5913 | 0.2024 | 0.5681 | 0.7501 | 0.9470 | nan | 0.9821 | 0.7343 | 0.9605 | 0.3236 | 0.0 | 0.9370 | 0.6715 | 0.9322 | 0.3001 | | 0.0369 | 28.0 | 6132 | 0.1830 | 0.5701 | 0.7553 | 0.9481 | nan | 0.9779 | 0.7698 | 0.9608 | 0.3126 | 0.0 | 0.9379 | 0.6871 | 0.9323 | 0.2931 | | 0.0373 | 29.0 | 6351 | 0.2162 | 0.5682 | 0.7535 | 0.9438 | nan | 0.9828 | 0.7011 | 0.9639 | 0.3665 | 0.0 | 0.9335 | 0.6482 | 0.9239 | 0.3352 | | 0.0348 | 30.0 | 6570 | 0.2126 | 0.5640 | 0.7479 | 0.9435 | nan | 0.9813 | 0.7097 | 0.9623 | 0.3384 | 0.0 | 0.9330 | 0.6537 | 0.9197 | 0.3135 | | 0.0354 | 31.0 | 6789 | 0.2025 | 0.5626 | 0.7467 | 0.9469 | nan | 0.9795 | 0.7453 | 0.9725 | 0.2896 | 0.0 | 0.9368 | 0.6762 | 0.9285 | 0.2716 | | 0.0344 | 32.0 | 7008 | 0.1973 | 0.5786 | 0.7739 | 0.9469 | nan | 0.9734 | 0.7828 | 0.9698 | 0.3695 | 0.0 | 0.9364 | 0.6853 | 0.9326 | 0.3389 | | 0.0333 | 33.0 | 7227 | 0.2199 | 0.5722 | 0.7624 | 0.9438 | nan | 0.9817 | 0.7045 | 0.9696 | 0.3940 | 0.0 | 0.9334 | 0.6481 | 0.9287 | 0.3510 | | 0.0345 | 34.0 | 7446 | 0.2052 | 0.5791 | 0.7724 | 0.9465 | nan | 0.9799 | 0.7347 | 0.9736 | 0.4015 | 0.0 | 0.9363 | 0.6698 | 0.9311 | 0.3582 | | 0.0326 | 35.0 | 7665 | 0.2176 | 0.5758 | 0.7629 | 0.9462 | nan | 0.9835 | 0.7124 | 0.9689 | 0.3868 | 0.0 | 0.9362 | 0.6595 | 
0.9345 | 0.3490 | | 0.034 | 36.0 | 7884 | 0.2247 | 0.5717 | 0.7557 | 0.9453 | nan | 0.9841 | 0.7033 | 0.9661 | 0.3694 | 0.0 | 0.9352 | 0.6533 | 0.9331 | 0.3369 | | 0.0324 | 37.0 | 8103 | 0.1957 | 0.5797 | 0.7736 | 0.9490 | nan | 0.9763 | 0.7801 | 0.9725 | 0.3657 | 0.0 | 0.9390 | 0.6963 | 0.9299 | 0.3333 | | 0.0332 | 38.0 | 8322 | 0.1996 | 0.5770 | 0.7644 | 0.9478 | nan | 0.9826 | 0.7310 | 0.9696 | 0.3743 | 0.0 | 0.9379 | 0.6741 | 0.9336 | 0.3393 | | 0.0332 | 39.0 | 8541 | 0.2129 | 0.5638 | 0.7423 | 0.9449 | nan | 0.9845 | 0.7021 | 0.9616 | 0.3212 | 0.0 | 0.9348 | 0.6514 | 0.9328 | 0.3001 | | 0.03 | 40.0 | 8760 | 0.2283 | 0.5694 | 0.7539 | 0.9441 | nan | 0.9840 | 0.6931 | 0.9686 | 0.3699 | 0.0 | 0.9339 | 0.6464 | 0.9277 | 0.3387 | | 0.0319 | 41.0 | 8979 | 0.2013 | 0.5741 | 0.7624 | 0.9471 | nan | 0.9804 | 0.7416 | 0.9670 | 0.3606 | 0.0 | 0.9370 | 0.6760 | 0.9277 | 0.3300 | | 0.0361 | 42.0 | 9198 | 0.2094 | 0.5709 | 0.7568 | 0.9463 | nan | 0.9810 | 0.7317 | 0.9663 | 0.3483 | 0.0 | 0.9362 | 0.6689 | 0.9279 | 0.3216 | | 0.0304 | 43.0 | 9417 | 0.2098 | 0.5731 | 0.7586 | 0.9468 | nan | 0.9821 | 0.7282 | 0.9666 | 0.3575 | 0.0 | 0.9368 | 0.6700 | 0.9295 | 0.3293 | | 0.0303 | 44.0 | 9636 | 0.2155 | 0.5705 | 0.7554 | 0.9470 | nan | 0.9814 | 0.7329 | 0.9702 | 0.3370 | 0.0 | 0.9369 | 0.6718 | 0.9301 | 0.3137 | | 0.03 | 45.0 | 9855 | 0.2183 | 0.5703 | 0.7541 | 0.9464 | nan | 0.9825 | 0.7229 | 0.9677 | 0.3435 | 0.0 | 0.9364 | 0.6657 | 0.9311 | 0.3181 | | 0.0301 | 45.66 | 10000 | 0.2142 | 0.5724 | 0.7571 | 0.9468 | nan | 0.9826 | 0.7246 | 0.9679 | 0.3533 | 0.0 | 0.9368 | 0.6678 | 0.9310 | 0.3263 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cpu - Datasets 2.13.1 - Tokenizers 0.13.3
rhplus0831/maid-yuzu-v2
rhplus0831
2024-02-01T02:08:39Z
4
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "mergekit", "merge", "base_model:smelborp/MixtralOrochi8x7B", "base_model:merge:smelborp/MixtralOrochi8x7B", "base_model:ycros/BagelMIsteryTour-v2-8x7B", "base_model:merge:ycros/BagelMIsteryTour-v2-8x7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-01T01:41:19Z
--- base_model: - smelborp/MixtralOrochi8x7B - ycros/BagelMIsteryTour-v2-8x7B tags: - mergekit - merge --- # maid-yuzu-v2 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). This model is an experiment to combine two models I liked. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [smelborp/MixtralOrochi8x7B](https://huggingface.co/smelborp/MixtralOrochi8x7B) * [ycros/BagelMIsteryTour-v2-8x7B](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: model: path: smelborp/MixtralOrochi8x7B dtype: bfloat16 merge_method: slerp parameters: t: - value: 0.25 slices: - sources: - layer_range: [0, 32] model: model: path: smelborp/MixtralOrochi8x7B - layer_range: [0, 32] model: model: path: ycros/BagelMIsteryTour-v2-8x7B ```
fuyu-quant/ibl-regression-ver4-branch
fuyu-quant
2024-02-01T01:59:41Z
1
0
peft
[ "peft", "region:us" ]
null
2024-01-31T16:08:50Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0
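The quantization settings listed above map onto a `transformers` `BitsAndBytesConfig` roughly as follows; this is a reconstruction from the listed values, not the authors' actual training code.

```python
# Reconstructed from the listed values; not the authors' original script.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```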
judith0/v1-ineClassification
judith0
2024-02-01T01:52:06Z
196
0
transformers
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-02-01T01:50:32Z
--- license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 1.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1430 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.89 | 6 | 0.5325 | 0.8936 | | 0.833 | 1.93 | 13 | 0.1850 | 0.9787 | | 0.833 | 2.67 | 18 | 0.1430 | 1.0 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
Castling/marian-finetuned-kde4-en-to-fr
Castling
2024-02-01T01:44:06Z
125
0
transformers
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-12-29T00:54:45Z
--- license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-fr tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-fr results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 config: en-fr split: train args: en-fr metrics: - name: Bleu type: bleu value: 52.837727401681214 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8556 - Bleu: 52.8377 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
BachNgoH/a2c-PandaPushDense-v2
BachNgoH
2024-02-01T01:36:37Z
2
0
stable-baselines3
[ "stable-baselines3", "PandaPushDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "arxiv:2106.13687", "model-index", "region:us" ]
reinforcement-learning
2023-02-07T14:53:03Z
--- library_name: stable-baselines3 tags: - PandaPushDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaPushDense-v2 type: PandaPushDense-v2 metrics: - type: mean_reward value: -7.94 +/- 4.62 name: mean_reward verified: false --- # **A2C** Agent playing **PandaPushDense-v2** This is a trained model of a **A2C** agent playing **PandaPushDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ``` Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
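The usage block in the card is still a TODO; a plausible completion with `huggingface_sb3` is sketched below. The checkpoint filename is a guess and must be replaced with the actual file stored in the repo.

```python
# Hedged sketch; the .zip filename inside the repo is assumed, not verified.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="BachNgoH/a2c-PandaPushDense-v2",
    filename="a2c-PandaPushDense-v2.zip",  # replace with the real filename
)
model = A2C.load(checkpoint)
```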
r3m3c3/english-to-kanji-c20000_2
r3m3c3
2024-02-01T01:35:37Z
29
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-01T01:34:20Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿงจ diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SurfaceData/llava-v1.6-vicuna-7b-processor
SurfaceData
2024-02-01T01:33:22Z
0
1
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-01T01:33:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jbottarini/rl_workshop_1
jbottarini
2024-02-01T01:16:15Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-02-01T01:14:37Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: rl_workshop_1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 13.90 +/- 10.89 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
paisanx/Reinforce-Pixelcopter-PLE-v7
paisanx
2024-02-01T01:15:38Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-02-01T01:15:31Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v7 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: -5.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
nanalysenko/panacea-test-2
nanalysenko
2024-02-01T01:02:16Z
44
0
sentence-transformers
[ "sentence-transformers", "safetensors", "roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-02-01T00:56:03Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 75 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 75, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
asun17904/anliR1-t5-base-kd
asun17904
2024-02-01T00:35:01Z
0
0
pytorch
[ "pytorch", "en", "license:mit", "region:us" ]
null
2024-01-31T05:17:44Z
--- language: en license: mit library_name: pytorch --- # Knowledge Continuity Regularized Network Dataset: ANLI Round: None Trainer Hyperparameters: - `lr` = 5e-05 - `per_device_batch_size` = 8 - `gradient_accumulation_steps` = 2 - `weight_decay` = 0.0 - `seed` = 42 Regularization Hyperparameters - `numerical stability denominator constant` = 0.01 - `lambda` = 0.0001 - `alpha` = 2.0 - `beta` = 2.0 Extended Logs: |eval_loss|eval_accuracy|epoch| |--|--|--| |34.239|0.403|1.0| |35.481|0.395|2.0| |35.944|0.407|3.0| |35.125|0.438|4.0| |34.990|0.444|5.0| |35.375|0.425|6.0| |34.941|0.449|7.0| |34.913|0.447|8.0| |34.559|0.461|9.0| |34.499|0.464|10.0| |34.479|0.462|11.0| |34.368|0.461|12.0| |34.369|0.464|13.0| |34.496|0.466|14.0| |34.358|0.468|15.0| |34.275|0.469|16.0| |34.063|0.477|17.0| |33.969|0.483|18.0| |33.991|0.478|19.0| **Test Accuracy: 0.482**
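The card above reports hyperparameters and accuracy but not how to load the checkpoint, and the repository is tagged only as `pytorch`. A minimal, hedged way to see what the repo actually ships is to list and download its files with `huggingface_hub`; the file layout and checkpoint format are assumptions the listing itself will confirm or refute.

```python
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "asun17904/anliR1-t5-base-kd"

# Inspect the repository contents before assuming any particular format.
files = list_repo_files(repo_id)
print(files)

# Download one file locally; replace files[0] with the checkpoint name from the listing.
local_path = hf_hub_download(repo_id=repo_id, filename=files[0])
print("downloaded to", local_path)
```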
Seher99/mistral-7b-dollyy
Seher99
2024-02-01T00:31:52Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-01-30T00:29:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
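The "How to Get Started" section of this card is empty. As a hedged sketch only: the repository name suggests a Mistral-7B variant tuned on Dolly-style data, so the snippet below assumes a standard causal-LM checkpoint loadable with `transformers`; if the repo actually holds an adapter or a different model type, these calls would need to change, and the prompt shown is purely hypothetical.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Seher99/mistral-7b-dollyy"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Hypothetical prompt; the card does not document any prompt format.
inputs = tokenizer("Explain what instruction tuning is.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```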
XiaQiang/q-FrozenLake-v1-4x4-noSlippery
XiaQiang
2024-02-01T00:24:33Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-02-01T00:24:31Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="XiaQiang/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ``` A self-contained version of this snippet is sketched below.
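The usage block above relies on the course's `load_from_hub` helper and an existing `gym` import. The sketch below is a self-contained alternative: it downloads the pickle directly and runs a greedy rollout. The dictionary keys (`"env_id"`, `"qtable"`) follow the Deep RL Course convention and are assumptions about this particular file, and plain `gym` may be needed instead of `gymnasium` depending on which notebook version produced it.

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Stand-in for the course's `load_from_hub` helper: fetch and unpickle the model dict.
path = hf_hub_download(repo_id="XiaQiang/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"], is_slippery=False)

# Greedy rollout: always take the action with the highest Q-value for the current state.
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
print("episode reward:", reward)
```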
asun17904/glue-rte-bert-base-uncased
asun17904
2024-02-01T00:20:09Z
0
0
pytorch
[ "pytorch", "en", "license:mit", "region:us" ]
null
2024-02-01T00:12:15Z
--- language: en license: mit library_name: pytorch --- # Plainly Optimized Network Dataset: GLUE Trainer Hyperparameters: - `lr` = 5e-05 - `per_device_batch_size` = 32 - `gradient_accumulation_steps` = 1 - `weight_decay` = 0.0 - `seed` = 42 |eval_loss|eval_accuracy|epoch| |--|--|--| |21.354|0.531|1.0| |20.627|0.621|2.0| |20.929|0.610|3.0| |21.703|0.585|4.0| |21.829|0.581|5.0| |21.040|0.635|6.0| |21.964|0.585|7.0| |21.910|0.581|8.0| |22.344|0.581|9.0|
AIFT/AIFT-instruct-v1.3-42dot_LLM-SFT-1.3B
AIFT
2024-02-01T00:19:21Z
150
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-01T00:17:32Z
--- license: cc-by-sa-4.0 --- <h1>AIFT-instruct-42dot_LLM-SFT-1.3B</h1> <b><Training data construction></b> <br> We took the KOR-OpenOrca-Platypus data released by kyujinpy, removed part of it (sampling), cleaned it, and used it. We then reviewed that data to identify the relevant tasks and, matching those tasks, built our own training data from NLP-related open-source data: history, science, math, machine-reading-comprehension, and review-analysis questions were constructed with GPT, and additional training data was built from AIHub general-knowledge and machine-reading-comprehension data (morphology-related, machine-reading-comprehension-related, and summarization). History and general-knowledge quizzes from various blogs were converted into training-data format by hand. Following the AI2AI Challenge data format, 500 elementary-level science and math questions were produced with GPT. English translation data (English-to-Korean / Korean-to-English) was also used as training data. In total, about 40,000 examples were used. <br> <br> + TruthfulQA-style questions were added (true/false questions about common myths). + Machine-reading-comprehension training data was built by obtaining answers through ChatGPT. + Grammar-related training data was added. <br> ### The training data files are not public. <br> <Model> <br> Training used 42dot_LLM-SFT-1.3B, released by 42dot, as the base model. <br> <br> <br> <b><Training></b> <br> Training was carried out with LoRA on 2x A100 40G GPUs.
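The card does not include a usage snippet. Since the repository is tagged for `transformers` text generation with a Llama-type architecture, the following minimal sketch should apply; the instruction-style prompt is an assumption, as the card does not document a prompt template.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AIFT/AIFT-instruct-v1.3-42dot_LLM-SFT-1.3B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Hypothetical Korean instruction prompt ("Question: What is the capital of Korea? Answer:").
prompt = "질문: 한국의 수도는 어디인가요?\n답변:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```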
Zintoulou/finetuningnewmodule1
Zintoulou
2024-02-01T00:14:58Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:codellama/CodeLlama-7b-Instruct-hf", "base_model:adapter:codellama/CodeLlama-7b-Instruct-hf", "license:llama2", "region:us" ]
null
2024-01-31T23:29:48Z
--- license: llama2 library_name: peft tags: - generated_from_trainer base_model: codellama/CodeLlama-7b-Instruct-hf model-index: - name: finetuningnewmodule1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuningnewmodule1 This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9468 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.688 | 1.0 | 1 | 2.7045 | | 2.2248 | 2.0 | 2 | 2.0812 | | 1.6447 | 3.0 | 3 | 1.7351 | | 1.2926 | 4.0 | 4 | 1.2896 | | 0.8111 | 5.0 | 5 | 1.0243 | | 0.4457 | 6.0 | 6 | 0.9231 | | 0.2562 | 7.0 | 7 | 0.9348 | | 0.1901 | 8.0 | 8 | 0.9468 | ### Framework versions - Transformers 4.36.0 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.1 ## Training procedure ### Framework versions - PEFT 0.6.0
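The card names the base model but gives no loading code. A minimal sketch with `peft` would look like the following; it assumes the repository stores a LoRA-style adapter for the listed CodeLlama base (which the `base_model:adapter` tag suggests but the card does not state), and the prompt is purely illustrative.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "codellama/CodeLlama-7b-Instruct-hf"
adapter_id = "Zintoulou/finetuningnewmodule1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter weights

# Hypothetical prompt; the card does not describe the training data or expected input format.
inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```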
son-of-man/HoloViolet-7B-test3
son-of-man
2024-02-01T00:04:15Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "GreenNode/GreenNode-mini-7B-multilingual-v1olet", "KoboldAI/Mistral-7B-Holodeck-1", "FPHam/Sydney_Pirate_Mistral_7b", "base_model:FPHam/Sydney_Pirate_Mistral_7b", "base_model:merge:FPHam/Sydney_Pirate_Mistral_7b", "base_model:GreenNode/GreenNode-mini-7B-multilingual-v1olet", "base_model:merge:GreenNode/GreenNode-mini-7B-multilingual-v1olet", "base_model:KoboldAI/Mistral-7B-Holodeck-1", "base_model:merge:KoboldAI/Mistral-7B-Holodeck-1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-01T00:00:22Z
--- tags: - merge - mergekit - lazymergekit - GreenNode/GreenNode-mini-7B-multilingual-v1olet - KoboldAI/Mistral-7B-Holodeck-1 - FPHam/Sydney_Pirate_Mistral_7b base_model: - GreenNode/GreenNode-mini-7B-multilingual-v1olet - KoboldAI/Mistral-7B-Holodeck-1 - FPHam/Sydney_Pirate_Mistral_7b --- # HoloViolet-7B-test3 HoloViolet-7B-test3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [GreenNode/GreenNode-mini-7B-multilingual-v1olet](https://huggingface.co/GreenNode/GreenNode-mini-7B-multilingual-v1olet) * [KoboldAI/Mistral-7B-Holodeck-1](https://huggingface.co/KoboldAI/Mistral-7B-Holodeck-1) * [FPHam/Sydney_Pirate_Mistral_7b](https://huggingface.co/FPHam/Sydney_Pirate_Mistral_7b) ## 🧩 Configuration ```yaml merge_method: task_arithmetic base_model: mistralai/Mistral-7B-v0.1 models: - model: mistralai/Mistral-7B-v0.1 - model: GreenNode/GreenNode-mini-7B-multilingual-v1olet parameters: weight: 0.69 - model: KoboldAI/Mistral-7B-Holodeck-1 parameters: weight: 0.42 - model: FPHam/Sydney_Pirate_Mistral_7b parameters: weight: 0.21 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "son-of-man/HoloViolet-7B-test3" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
r3m3c3/english-to-kanji-c14500_2
r3m3c3
2024-01-31T23:50:26Z
29
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-01-31T23:49:16Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
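The three `english-to-kanji` checkpoints in this batch share the same empty template, so one hedged loading sketch is given here only. Each is tagged as a `StableDiffusionPipeline` text-to-image model, which grounds the calls below; the prompt format (an English word to render as a kanji-like glyph) is an assumption based only on the repository name.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "r3m3c3/english-to-kanji-c14500_2", torch_dtype=torch.float16
).to("cuda")

# Hypothetical prompt; the card does not document what the model expects.
image = pipe("water", num_inference_steps=30).images[0]
image.save("kanji_water.png")
```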
r3m3c3/english-to-kanji-c8000_2
r3m3c3
2024-01-31T23:47:14Z
29
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-01-31T23:46:06Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
r3m3c3/english-to-kanji-c7000_2
r3m3c3
2024-01-31T23:45:36Z
29
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-01-31T23:44:25Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rasyosef/gpt2-oscar-amharic-tokenizer
rasyosef
2024-01-31T23:37:13Z
0
0
transformers
[ "transformers", "am", "dataset:oscar", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-01-31T23:05:31Z
--- license: mit datasets: - oscar language: - am library_name: transformers --- # Amharic BPE Tokenizer This repo contains a **Byte-Pair Encoding** tokenizer trained on the **Amharic** subset of the [oscar](https://huggingface.co/datasets/oscar) dataset. It's the same as the GPT-2 tokenizer but trained from scratch on an Amharic dataset with a vocabulary size of `24000`. # How to use You can load the tokenizer from the Hugging Face Hub as follows. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("rasyosef/gpt2-oscar-amharic-tokenizer") tokenizer("አባይን ያላየ የፕሌን ቲኬት እችለዋለው።") ```
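To check the stated vocabulary size and see the sub-word pieces the tokenizer actually produces, a short follow-up sketch (standard `transformers` tokenizer methods only; the sample sentence is an arbitrary Amharic example, "Addis Ababa is the capital of Ethiopia"):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rasyosef/gpt2-oscar-amharic-tokenizer")

print(len(tokenizer))  # vocabulary size; the card says this should be 24000
tokens = tokenizer.tokenize("አዲስ አበባ የኢትዮጵያ ዋና ከተማ ናት።")
print(tokens)                                   # byte-level BPE pieces
print(tokenizer.convert_tokens_to_ids(tokens))  # their ids in the vocabulary
```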