| column | dtype | value range |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 to 18.3M |
| metadata | stringlengths | 2 to 1.07B |
| id | stringlengths | 5 to 122 |
| last_modified | null | null |
| tags | sequencelengths | 1 to 1.84k |
| sha | null | null |
| created_at | stringlengths | 25 to 25 |
null
null
{}
bobbyw/mt5-small-finetuned-cnn
null
[ "region:us" ]
null
2024-04-30T20:35:03+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-160m_mz-135_WordLength_n-its-10-seed-2 This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
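The card above gives training details but no usage snippet. A minimal loading sketch (not part of the original card), assuming the checkpoint exposes a standard transformers sequence-classification head as its tags suggest, is shown below; the label names are undocumented, so the output mapping is an assumption:

```python
# Hedged sketch: load the fine-tuned Pythia classifier with the standard transformers API.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "AlignmentResearch/robust_llm_pythia-160m_mz-135_WordLength_n-its-10-seed-2"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("Your text here", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities; label meanings are not documented
```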
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-160m", "model-index": [{"name": "robust_llm_pythia-160m_mz-135_WordLength_n-its-10-seed-2", "results": []}]}
AlignmentResearch/robust_llm_pythia-160m_mz-135_WordLength_n-its-10-seed-2
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-160m", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T20:35:18+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-160m_mz-135_WordLength_n-its-10-seed-1 This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 1 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-160m", "model-index": [{"name": "robust_llm_pythia-160m_mz-135_WordLength_n-its-10-seed-1", "results": []}]}
AlignmentResearch/robust_llm_pythia-160m_mz-135_WordLength_n-its-10-seed-1
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-160m", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T20:36:11+00:00
null
null
{}
MSankara/tinyllama-ner
null
[ "safetensors", "region:us" ]
null
2024-04-30T20:36:24+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
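The "How to Get Started with the Model" section above is empty. A minimal sketch (not from the original card), assuming this repository holds a PEFT adapter for the mistralai/Mistral-7B-Instruct-v0.2 base model named in the metadata, is:

```python
# Hedged sketch: attach the PEFT adapter to its stated base model and generate.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "vaarrun009/Rzolut_Mistral_half"  # taken from this entry's repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("[INST] Summarize what PEFT adapters are. [/INST]", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```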
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"}
vaarrun009/Rzolut_Mistral_half
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-04-30T20:36:33+00:00
feature-extraction
transformers
# fine-tuned/medical-10-10-1-jinaai_jina-embeddings-v2-small-en-50-gpt-3.5-turbo-01_8647177611 ## Model Description fine-tuned/medical-10-10-1-jinaai_jina-embeddings-v2-small-en-50-gpt-3.5-turbo-01_8647177611 is a fine-tuned version of jinaai/jina-embeddings-v2-small-en designed for a specific domain. ## Use Case This model is designed to support various applications in natural language processing and understanding. ## Associated Dataset The dataset for this model can be found [**here**](https://huggingface.co/datasets/fine-tuned/fine-tuned/medical-10-10-1-jinaai_jina-embeddings-v2-small-en-50-gpt-3.5-turbo-01_8647177611). ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from transformers import AutoModel, AutoTokenizer llm_name = "fine-tuned/medical-10-10-1-jinaai_jina-embeddings-v2-small-en-50-gpt-3.5-turbo-01_8647177611" tokenizer = AutoTokenizer.from_pretrained(llm_name) model = AutoModel.from_pretrained(llm_name, trust_remote_code=True) tokens = tokenizer("Your text here", return_tensors="pt") embedding = model(**tokens) ```
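Note that the snippet above returns the raw model output rather than a single embedding vector. A hedged follow-up, continuing from the variables defined in that snippet and assuming the usual BERT-style `last_hidden_state` output, is to mean-pool the token states into one sentence embedding:

```python
# Hedged follow-up to the card's snippet: mean-pool token states into a sentence vector,
# ignoring padding positions. Reuses `model` and `tokens` from the example above.
import torch

with torch.no_grad():
    outputs = model(**tokens)
mask = tokens["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # (batch_size, hidden_size)
```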
{}
fine-tuned/medical-10-10-1-jinaai_jina-embeddings-v2-small-en-50-gpt-3.5-turbo-01_8647177611
null
[ "transformers", "safetensors", "bert", "feature-extraction", "custom_code", "region:us" ]
null
2024-04-30T20:38:18+00:00
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - yuffish/blackchair-segmented This is a dreambooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks object using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
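The "How to use" block above is left as a TODO. A minimal sketch (not from the original card), assuming the repo contains a complete Stable Diffusion 2.1-based pipeline as its `diffusers:StableDiffusionPipeline` tag suggests, is:

```python
# Hedged sketch: run the DreamBooth pipeline with the instance prompt from the card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "yuffish/blackchair-segmented", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of sks object in a bright living room"  # "sks object" is the trained token
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("sks_object.png")
```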
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "inference": true, "base_model": "stabilityai/stable-diffusion-2-1-base", "instance_prompt": "a photo of sks object"}
yuffish/blackchair-segmented
null
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:stabilityai/stable-diffusion-2-1-base", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-30T20:39:46+00:00
null
null
{"license": "llama3"}
epadcece/llama4
null
[ "license:llama3", "region:us" ]
null
2024-04-30T20:40:40+00:00
null
transformers
## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------|---------|--------|----------| | **Llama 3 (8B)** | [▶️ Start for free](https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing) | 2x faster | 60% less | | **Mistral (7B)** | [▶️ Start for free](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 73% less | | **Gemma (7B)** | [▶️ Start for free](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 71% less | | **ORPO** | [▶️ Start for free](https://colab.research.google.com/drive/11t4njE3c4Lxl-07OD8lJSMKkfyJml3Tn?usp=sharing) | 1.9x faster | 43% less | | **DPO Zephyr** | [▶️ Start for free](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 43% less | | **Phi-3 (3.8B)** | [▶️ Start for free](https://colab.research.google.com/drive/1NvkBmkHfucGO3Ve9s1NKZvMNlw5p83ym?usp=sharing) | 2x faster | 50% less | | **TinyLlama** | [▶️ Start for free](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
{"license": "apache-2.0"}
kevin-hu-lab/finetuning
null
[ "transformers", "gguf", "llama", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T20:42:00+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Base Ko - Dearlie This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Noise Data dataset. It achieves the following results on the evaluation set: - Loss: 5.4914 - Cer: 81.4735 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0005 | 500.0 | 500 | 5.4914 | 81.4735 | ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
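The card above reports metrics but no usage example; note that the 81.47 CER suggests transcriptions may be unreliable. A minimal sketch (not from the original card) for trying the checkpoint with the transformers pipeline API is:

```python
# Hedged sketch: transcribe Korean audio with the fine-tuned Whisper Base checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Dearlie/whisper-base")

# "sample_ko.wav" is a placeholder path; any local audio file or sample array works.
# Forcing the target language during generation may help for Korean-only use.
result = asr("sample_ko.wav")
print(result["text"])
```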
{"language": ["ko"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["AIHub/noise"], "base_model": "openai/whisper-base", "model-index": [{"name": "Whisper Base Ko - Dearlie", "results": []}]}
Dearlie/whisper-base
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ko", "dataset:AIHub/noise", "base_model:openai/whisper-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-30T20:42:17+00:00
null
null
{}
Juggernaut259/Alabaster.blend
null
[ "region:us" ]
null
2024-04-30T20:42:45+00:00
null
null
{}
bobbyw/bart-large-cnn-finetuned-cnn
null
[ "region:us" ]
null
2024-04-30T20:43:47+00:00
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
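The usage block above is a placeholder. A minimal sketch completing it is below; it is not from the original card, and the `ppo-LunarLander-v2.zip` filename is an assumption based on the usual huggingface_sb3 naming convention:

```python
# Hedged sketch: download the PPO checkpoint and roll it out in LunarLander-v2.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="rodeoFlip/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed default filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```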
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "257.62 +/- 17.43", "name": "mean_reward", "verified": false}]}]}]}
rodeoFlip/ppo-LunarLander-v2
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-30T20:45:31+00:00
null
null
{}
gabybaldeon/dqn-SpaceInvadersNoFrameskip-v4
null
[ "region:us" ]
null
2024-04-30T20:46:25+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
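The card above is an unedited template with no usage information. Going only by the repository tags (llama, text-generation, conversational), a hedged loading sketch is:

```python
# Hedged sketch: load the checkpoint as a causal LM. Nothing about its training data
# or expected prompt format is documented, so treat generations accordingly.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "shallow6414/1q31l6l"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto", torch_dtype="auto")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```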
{"library_name": "transformers", "tags": []}
shallow6414/1q31l6l
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T20:48:10+00:00
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Qwen1.5-0.5B-Chat - bnb 4bits - Model creator: https://huggingface.co/Qwen/ - Original model: https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat/ Original model description: --- license: other license_name: tongyi-qianwen-research license_link: >- https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat/blob/main/LICENSE language: - en pipeline_tag: text-generation tags: - chat --- # Qwen1.5-0.5B-Chat ## Introduction Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include: * 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated; * Significant performance improvement in human preference for chat models; * Multilingual support of both base and chat models; * Stable support of 32K context length for models of all sizes * No need of `trust_remote_code`. For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5). <br> ## Model Details Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA (except for 32B) and the mixture of SWA and full attention. ## Training details We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. ## Requirements The code of Qwen1.5 has been in the latest Hugging face transformers and we advise you to install `transformers>=4.37.0`, or you might encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart Here provides a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen1.5-0.5B-Chat", torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B-Chat") prompt = "Give me a short introduction to large language model." messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` For quantized models, we advise you to use the GPTQ, AWQ, and GGUF correspondents, namely `Qwen1.5-0.5B-Chat-GPTQ-Int4`, `Qwen1.5-0.5B-Chat-GPTQ-Int8`, `Qwen1.5-0.5B-Chat-AWQ`, and `Qwen1.5-0.5B-Chat-GGUF`. 
## Tips * If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`. ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{qwen, title={Qwen Technical Report}, author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu}, journal={arXiv preprint arXiv:2309.16609}, year={2023} } ```
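The quickstart above targets the original Qwen/Qwen1.5-0.5B-Chat repository. To use this pre-quantized 4-bit variant instead, the same chat-template code should work with the repo id swapped, assuming the bitsandbytes quantization config is stored alongside the weights:

```python
# Hedged sketch: load the bnb 4-bit checkpoint directly (requires bitsandbytes + accelerate).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/Qwen_-_Qwen1.5-0.5B-Chat-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
# From here, the apply_chat_template usage shown in the quickstart above applies unchanged.
```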
{}
RichardErkhov/Qwen_-_Qwen1.5-0.5B-Chat-4bits
null
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-30T20:48:22+00:00
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Qwen1.5-0.5B-Chat - bnb 8bits - Model creator: https://huggingface.co/Qwen/ - Original model: https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat/ Original model description: --- license: other license_name: tongyi-qianwen-research license_link: >- https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat/blob/main/LICENSE language: - en pipeline_tag: text-generation tags: - chat --- # Qwen1.5-0.5B-Chat ## Introduction Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include: * 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated; * Significant performance improvement in human preference for chat models; * Multilingual support of both base and chat models; * Stable support of 32K context length for models of all sizes * No need of `trust_remote_code`. For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5). <br> ## Model Details Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA (except for 32B) and the mixture of SWA and full attention. ## Training details We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. ## Requirements The code of Qwen1.5 has been in the latest Hugging face transformers and we advise you to install `transformers>=4.37.0`, or you might encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart Here provides a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen1.5-0.5B-Chat", torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B-Chat") prompt = "Give me a short introduction to large language model." messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` For quantized models, we advise you to use the GPTQ, AWQ, and GGUF correspondents, namely `Qwen1.5-0.5B-Chat-GPTQ-Int4`, `Qwen1.5-0.5B-Chat-GPTQ-Int8`, `Qwen1.5-0.5B-Chat-AWQ`, and `Qwen1.5-0.5B-Chat-GGUF`. 
## Tips * If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`. ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{qwen, title={Qwen Technical Report}, author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu}, journal={arXiv preprint arXiv:2309.16609}, year={2023} } ```
{}
RichardErkhov/Qwen_-_Qwen1.5-0.5B-Chat-8bits
null
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-30T20:49:36+00:00
text-generation
transformers
# Llama-3-8B-Instruct-GPTQ-4-Bit - Original Model creator: [Meta Llama from Meta](https://huggingface.co/meta-llama) - Original model: [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) - Built with Meta Llama 3 - Quantized by [Astronomer](https://astronomer.io) # Important Note About Serving with vLLM & oobabooga/text-generation-webui - For loading this model onto vLLM, make sure all requests have `"stop_token_ids":[128001, 128009]` to temporarily address the non-stop generation issue. - vLLM does not yet respect `generation_config.json`. - The vLLM team is working on a fix for this: https://github.com/vllm-project/vllm/issues/4180 - For oobabooga/text-generation-webui - Load the model via AutoGPTQ, with `no_inject_fused_attention` enabled. This works around a bug in the AutoGPTQ library. - Under `Parameters` -> `Generation` -> `Skip special tokens`: turn this off (deselect) - Under `Parameters` -> `Generation` -> `Custom stopping strings`: add `"<|end_of_text|>","<|eot_id|>"` to the field <!-- description start --> ## Description This repo contains 4 Bit quantized GPTQ model files for [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). This model can be loaded with less than 6 GB of VRAM (huge reduction from the original 16.07GB model) and can be served lightning fast with the cheapest Nvidia GPUs possible (Nvidia T4, Nvidia K80, RTX 4070, etc). The 4 bit GPTQ quant has small quality degradation from the original `bfloat16` model but can be served on much smaller GPUs with maximum improvement in latency and throughput. <!-- description end --> ## GPTQ Quantization Method - This model is quantized by utilizing the AutoGPTQ library, following best practices noted in the [GPTQ paper](https://arxiv.org/abs/2210.17323) - Quantization is calibrated and aligned with random samples from the specified dataset (wikitext for now) for minimum accuracy loss. | Branch | Bits | Group Size | Act Order | Damp % | GPTQ Dataset | Sequence Length | VRAM Size | ExLlama | Description | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 5.74 GB | Yes | 4-bit, with Act Order and group size 128g. Smallest model possible with small accuracy loss | | More variants to come | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | May upload additional variants of GPTQ 4-bit models in the future using different parameters, such as other group sizes. | ## Serving this GPTQ model using vLLM Tested serving this model via vLLM using an Nvidia T4 (16GB VRAM).
Tested with the command below: ``` python -m vllm.entrypoints.openai.api_server --model astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit --max-model-len 8192 --dtype float16 ``` For the non-stop token generation bug, make sure to send requests with `"stop_token_ids": [128001, 128009]` to the vLLM endpoint. Example: ```json { "model": "astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit", "messages": [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Who created Llama 3?"} ], "max_tokens": 2000, "stop_token_ids":[128001,128009] } ``` ### Prompt Template ``` <|begin_of_text|><|start_header_id|>user<|end_header_id|> {{prompt}}<|eot_id|> <|start_header_id|>assistant<|end_header_id|> ``` ### Contributors - Quantized by [David Xue, Machine Learning Engineer from Astronomer](https://www.linkedin.com/in/david-xue-uva/)
{"license": "other", "tags": ["llama", "llama-3", "facebook", "meta", "astronomer", "gptq", "pretrained", "quantized", "finetuned", "autotrain_compatible", "endpoints_compatible"], "datasets": ["wikitext"], "model_name": "Meta-Llama-3-8B-Instruct", "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "inference": false, "model_creator": "astronomer-io", "model_type": "llama", "pipeline_tag": "text-generation", "prompt_template": "{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %}", "quantized_by": "davidxmle", "license_name": "llama-3-community-license", "license_link": "https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/blob/main/LICENSE"}
davidxmle/Llama-3-8B-Instruct-GPTQ-4-Bit-Debug
null
[ "transformers", "llama", "text-generation", "llama-3", "facebook", "meta", "astronomer", "gptq", "pretrained", "quantized", "finetuned", "autotrain_compatible", "endpoints_compatible", "conversational", "dataset:wikitext", "arxiv:2210.17323", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-30T20:49:56+00:00
null
null
{}
Aleksrrrrr/Miner
null
[ "region:us" ]
null
2024-04-30T20:50:40+00:00
null
null
{}
lotusfine/lotusbot
null
[ "region:us" ]
null
2024-04-30T20:51:07+00:00
null
null
{}
BohdanPetryshyn/codellama-7b-openapi-completion-tmp
null
[ "region:us" ]
null
2024-04-30T20:52:08+00:00
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - embracellm/sushi07_LoRA <Gallery /> ## Model description These are embracellm/sushi07_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of sushi to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](embracellm/sushi07_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
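The "How to use" block above is a TODO. A minimal sketch (not from the original card), assuming these are standard SDXL LoRA weights loadable with `load_lora_weights` and using the fp16-fix VAE the card mentions, is:

```python
# Hedged sketch: SDXL base pipeline + the sushi LoRA; trigger phrase taken from the card.
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("embracellm/sushi07_LoRA")

image = pipe("a photo of sushi on a wooden board", num_inference_steps=25).images[0]
image.save("sushi.png")
```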
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of sushi", "widget": []}
embracellm/sushi07_LoRA
null
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-30T20:53:07+00:00
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA text2image fine-tuning - manusehgal/sdxl14finetuningnew These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the YaYaB/onepiece-blip-captions dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
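The "How to use" block above is likewise a TODO. A minimal sketch (not from the original card), assuming the LoRA weights attach to the runwayml/stable-diffusion-v1-5 base named in the card, is:

```python
# Hedged sketch: Stable Diffusion v1-5 with the One Piece-style LoRA applied.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("manusehgal/sdxl14finetuningnew")

image = pipe("a pirate with a straw hat, anime style", num_inference_steps=30).images[0]
image.save("onepiece_lora.png")
```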
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "diffusers-training", "lora"], "base_model": "runwayml/stable-diffusion-v1-5", "inference": true}
manusehgal/sdxl14finetuningnew
null
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers-training", "lora", "base_model:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
null
2024-04-30T21:00:22+00:00
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Qwen1.5-14B-Chat - bnb 8bits - Model creator: https://huggingface.co/Qwen/ - Original model: https://huggingface.co/Qwen/Qwen1.5-14B-Chat/ Original model description: --- license: other license_name: tongyi-qianwen license_link: >- https://huggingface.co/Qwen/Qwen1.5-14B-Chat/blob/main/LICENSE language: - en pipeline_tag: text-generation tags: - chat --- # Qwen1.5-14B-Chat ## Introduction Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include: * 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated; * Significant performance improvement in human preference for chat models; * Multilingual support of both base and chat models; * Stable support of 32K context length for models of all sizes * No need of `trust_remote_code`. For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5). <br> ## Model Details Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA (except for 32B) and the mixture of SWA and full attention. ## Training details We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. ## Requirements The code of Qwen1.5 has been in the latest Hugging face transformers and we advise you to install `transformers>=4.37.0`, or you might encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart Here provides a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen1.5-14B-Chat", torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-14B-Chat") prompt = "Give me a short introduction to large language model." messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` For quantized models, we advise you to use the GPTQ, AWQ, and GGUF correspondents, namely `Qwen1.5-14B-Chat-GPTQ-Int4`, `Qwen1.5-14B-Chat-GPTQ-Int8`, `Qwen1.5-14B-Chat-AWQ`, and `Qwen1.5-14B-Chat-GGUF`. 
## Tips * If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`. ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{qwen, title={Qwen Technical Report}, author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu}, journal={arXiv preprint arXiv:2309.16609}, year={2023} } ```
{}
RichardErkhov/Qwen_-_Qwen1.5-14B-Chat-8bits
null
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-30T21:01:16+00:00
null
flair
{"language": ["en"], "license": "cc", "library_name": "flair", "datasets": ["HuggingFaceFW/fineweb"], "metrics": ["character"]}
MatMat626/GoldenGlare22
null
[ "flair", "en", "dataset:HuggingFaceFW/fineweb", "license:cc", "region:us" ]
null
2024-04-30T21:01:35+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
samzirbo/mT5.tokenizer.en-es_16K
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T21:01:36+00:00
null
null
{}
DivyaMereddy007/Add_NewLayer_Finetuning_V1_TrainSetenceTransforme_Cpyfrmv5finetuneEPOC5
null
[ "region:us" ]
null
2024-04-30T21:02:43+00:00
null
null
{}
squaadinc/1714510960701x454458478288175100
null
[ "region:us" ]
null
2024-04-30T21:02:45+00:00
image-text-to-text
xtuner
# mlx-community/llava-llama-3-8b-v1_1-4bit This model was converted to MLX format from [`xtuner/llava-llama-3-8b-v1_1-transformers`](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-transformers) using mlx-vlm version **0.0.3**. Refer to the [original model card](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-transformers) for more details on the model. ## Use with mlx ```bash pip install -U mlx-vlm ``` ```bash python -m mlx_vlm.generate --model mlx-community/llava-llama-3-8b-v1_1-4bit --max-tokens 100 --temp 0.0 ```
{"library_name": "xtuner", "tags": ["mlx"], "datasets": ["Lin-Chen/ShareGPT4V"], "pipeline_tag": "image-text-to-text"}
mlx-community/llava-llama-3-8b-v1_1-4bit
null
[ "xtuner", "safetensors", "llava", "mlx", "image-text-to-text", "dataset:Lin-Chen/ShareGPT4V", "region:us" ]
null
2024-04-30T21:04:09+00:00
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - embracellm/sushi08_LoRA <Gallery /> ## Model description These are embracellm/sushi08_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of Green Veggie Roll to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](embracellm/sushi08_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of Green Veggie Roll ", "widget": []}
embracellm/sushi08_LoRA
null
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-30T21:04:31+00:00
null
null
{"license": "mit"}
pcanete/profepato
null
[ "license:mit", "region:us" ]
null
2024-04-30T21:04:42+00:00
token-classification
transformers
{}
Negus/layoutlmv3-finetuned-cord_100
null
[ "transformers", "tensorboard", "safetensors", "layoutlmv3", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T21:06:14+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/bohdan-petryshyn/huggingface/runs/5ussv3qq) # codellama-7b-openapi-completion-ctx-lvl-prmt This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3210 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2732 | 0.1 | 100 | 0.3538 | | 0.3278 | 0.2 | 200 | 0.3442 | | 0.2121 | 0.3 | 300 | 0.3424 | | 0.1887 | 0.4 | 400 | 0.3349 | | 0.1218 | 0.5 | 500 | 0.3509 | | 0.0896 | 0.6 | 600 | 0.3503 | | 0.3471 | 0.7 | 700 | 0.3320 | | 0.2532 | 0.8 | 800 | 0.3259 | | 0.21 | 0.9 | 900 | 0.3226 | | 0.2608 | 1.0 | 1000 | 0.3210 | ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.41.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
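As with the other auto-generated cards in this dump, no usage snippet is given. A minimal sketch (not from the original card), assuming the repository stores a PEFT adapter for the codellama/CodeLlama-7b-hf base model named in the metadata, is:

```python
# Hedged sketch: attach the OpenAPI-completion adapter to the CodeLlama base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "codellama/CodeLlama-7b-hf"
adapter_id = "BohdanPetryshyn/codellama-7b-openapi-completion-ctx-lvl-prmt"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(model, adapter_id)

prompt = '{"openapi": "3.0.0", "info": {"title": '  # hypothetical partial OpenAPI document
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```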
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "codellama/CodeLlama-7b-hf", "model-index": [{"name": "codellama-7b-openapi-completion-ctx-lvl-prmt", "results": []}]}
BohdanPetryshyn/codellama-7b-openapi-completion-ctx-lvl-prmt
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:codellama/CodeLlama-7b-hf", "license:llama2", "region:us" ]
null
2024-04-30T21:06:21+00:00
text-classification
transformers
# merge_out This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mllm-dev/merge_diff_data_DROID](https://huggingface.co/mllm-dev/merge_diff_data_DROID) as a base. ### Models Merged The following models were included in the merge: * [mllm-dev/merge_diff_data_YELP](https://huggingface.co/mllm-dev/merge_diff_data_YELP) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: mllm-dev/merge_diff_data_DROID dtype: float16 merge_method: ties slices: - sources: - layer_range: [0, 12] model: mllm-dev/merge_diff_data_DROID parameters: weight: 0.5 - layer_range: [0, 12] model: mllm-dev/merge_diff_data_YELP parameters: weight: 0.5 ```
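Since the repository tags mark this merge as a GPT-2-based text-classification model, a short hedged sketch for trying it (not from the original card; the label names are undocumented) is:

```python
# Hedged sketch: query the TIES-merged classifier through the transformers pipeline API.
from transformers import pipeline

clf = pipeline("text-classification", model="mllm-dev/merge_yelp_droid_ties_2")
print(clf("The food was great but the service was slow."))  # label meanings are undocumented
```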
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["mllm-dev/merge_diff_data_DROID", "mllm-dev/merge_diff_data_YELP"]}
mllm-dev/merge_yelp_droid_ties_2
null
[ "transformers", "safetensors", "gpt2", "text-classification", "mergekit", "merge", "arxiv:2306.01708", "base_model:mllm-dev/merge_diff_data_DROID", "base_model:mllm-dev/merge_diff_data_YELP", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T21:09:23+00:00
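The TIES configuration quoted in the row above can be reproduced with mergekit. The sketch below writes that YAML to disk and calls what is assumed to be mergekit's `mergekit-yaml <config> <output-dir>` command-line entry point; the file and directory names are arbitrary, and the exact flags should be checked against the installed mergekit version.

```python
import subprocess
import textwrap
from pathlib import Path

# The merge configuration from the model card above, written out verbatim.
config = textwrap.dedent("""\
    base_model: mllm-dev/merge_diff_data_DROID
    dtype: float16
    merge_method: ties
    slices:
    - sources:
      - layer_range: [0, 12]
        model: mllm-dev/merge_diff_data_DROID
        parameters:
          weight: 0.5
      - layer_range: [0, 12]
        model: mllm-dev/merge_diff_data_YELP
        parameters:
          weight: 0.5
    """)
Path("ties_config.yml").write_text(config)

# Assumed CLI invocation; consult the mergekit README for the options your version supports.
subprocess.run(["mergekit-yaml", "ties_config.yml", "merged_model"], check=True)
```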
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
lunarsylph/stablecell_v56
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T21:09:44+00:00
text-to-image
diffusers
# Real Dream SDXL API Inference ![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/16275139341714511316.png) ## Get API Key Get API key from [ModelsLab API](http://modelslab.com), No Payment needed. Replace Key in below code, change **model_id** to "real-dream-sdxl" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs) Try model for free: [Generate Images](https://modelslab.com/models/real-dream-sdxl) Model link: [View model](https://modelslab.com/models/real-dream-sdxl) View all models: [View Models](https://modelslab.com/models) import requests import json url = "https://modelslab.com/api/v6/images/text2img" payload = json.dumps({ "key": "your_api_key", "model_id": "real-dream-sdxl", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) > Use this coupon code to get 25% off **DMGG0RBN**
{"license": "creativeml-openrail-m", "tags": ["modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic"], "pinned": true}
stablediffusionapi/real-dream-sdxl
null
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
null
2024-04-30T21:11:15+00:00
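The card above demonstrates the hosted HTTP API. Since the row is also tagged `diffusers:StableDiffusionXLPipeline`, the weights can presumably be loaded locally with diffusers as well; the sketch below assumes standard diffusers-format SDXL weights in the repository and uses an illustrative prompt.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Assumes the repo hosts diffusers-format SDXL weights, as the pipeline tag in the row suggests.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stablediffusionapi/real-dream-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="ultra realistic close up portrait of a cyberpunk woman, cinematic lighting",
    negative_prompt="blurry, deformed, bad anatomy, extra fingers",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("real_dream_sdxl.png")
```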
null
null
{}
magnifi/llama-cls-ner-mt-chat-v21-2_epoch_24-ct2
null
[ "region:us" ]
null
2024-04-30T21:12:00+00:00
null
null
{}
amoryooyu/fipe
null
[ "region:us" ]
null
2024-04-30T21:12:12+00:00
feature-extraction
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
claudios/CodeGPT-small-py
null
[ "transformers", "safetensors", "gpt2", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T21:12:55+00:00
null
null
{}
SeriusTr/luz
null
[ "region:us" ]
null
2024-04-30T21:13:25+00:00
null
null
{"license": "openrail"}
illokeonds/rvc
null
[ "license:openrail", "region:us" ]
null
2024-04-30T21:14:21+00:00
null
null
{}
lunarsylph/stabletemp_v1
null
[ "region:us" ]
null
2024-04-30T21:15:50+00:00
null
null
{}
fatcat1337/dreamshaper-8-onnx
null
[ "onnx", "region:us" ]
null
2024-04-30T21:16:18+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
lunarsylph/moontemp_v1
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T21:16:24+00:00
image-segmentation
pytorch
![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/deeplabv3_plus_mobilenet_quantized/web-assets/model_demo.png) # DeepLabV3-Plus-MobileNet-Quantized: Optimized for Mobile Deployment ## Quantized Deep Convolutional Neural Network model for semantic segmentation DeepLabV3 Quantized is designed for semantic segmentation at multiple scales, trained on various datasets. It uses MobileNet as a backbone. This model is an implementation of DeepLabV3-Plus-MobileNet-Quantized found [here](https://github.com/jfzhang95/pytorch-deeplab-xception). This repository provides scripts to run DeepLabV3-Plus-MobileNet-Quantized on Qualcomm® devices. More details on model performance across various devices, can be found [here](https://aihub.qualcomm.com/models/deeplabv3_plus_mobilenet_quantized). ### Model Details - **Model Type:** Semantic segmentation - **Model Stats:** - Model checkpoint: VOC2012 - Input resolution: 513x513 - Number of parameters: 5.80M - Model size: 6.04 MB | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model | ---|---|---|---|---|---|---|---| | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 3.523 ms | 0 - 2 MB | INT8 | NPU | [DeepLabV3-Plus-MobileNet-Quantized.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet-Quantized/blob/main/DeepLabV3-Plus-MobileNet-Quantized.tflite) | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 5.308 ms | 1 - 9 MB | INT8 | NPU | [DeepLabV3-Plus-MobileNet-Quantized.so](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet-Quantized/blob/main/DeepLabV3-Plus-MobileNet-Quantized.so) ## Installation This model can be installed as a Python package via pip. ```bash pip install qai-hub-models ``` ## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`. With this API token, you can configure your client to run models on the cloud hosted devices. ```bash qai-hub configure --api_token API_TOKEN ``` Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information. ## Demo off target The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input. ```bash python -m qai_hub_models.models.deeplabv3_plus_mobilenet_quantized.demo ``` The above demo runs a reference implementation of pre-processing, model inference, and post processing. **NOTE**: If you want running in a Jupyter Notebook or Google Colab like environment, please add the following to your cell (instead of the above). ``` %run -m qai_hub_models.models.deeplabv3_plus_mobilenet_quantized.demo ``` ### Run model on a cloud-hosted device In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following: * Performance check on-device on a cloud-hosted device * Downloads compiled assets that can be deployed on-device for Android. * Accuracy check between PyTorch and on-device outputs. 
```bash python -m qai_hub_models.models.deeplabv3_plus_mobilenet_quantized.export ``` ``` Profile Job summary of DeepLabV3-Plus-MobileNet-Quantized -------------------------------------------------- Device: QCS8550 (Proxy) (12) Estimated Inference Time: 3.53 ms Estimated Peak Memory Range: 0.01-16.76 MB Compute Units: NPU (99) | Total (99) Profile Job summary of DeepLabV3-Plus-MobileNet-Quantized -------------------------------------------------- Device: QCS8550 (Proxy) (12) Estimated Inference Time: 5.30 ms Estimated Peak Memory Range: 0.79-13.51 MB Compute Units: NPU (100) | Total (100) ``` ## How does this work? This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/DeepLabV3-Plus-MobileNet-Quantized/export.py) leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model on-device. Lets go through each step below in detail: Step 1: **Compile model for on-device deployment** To compile a PyTorch model for on-device deployment, we first trace the model in memory using the `jit.trace` and then call the `submit_compile_job` API. ```python import torch import qai_hub as hub from qai_hub_models.models.deeplabv3_plus_mobilenet_quantized import Model # Load the model torch_model = Model.from_pretrained() torch_model.eval() # Device device = hub.Device("Samsung Galaxy S23") # Trace model input_shape = torch_model.get_input_spec() sample_inputs = torch_model.sample_inputs() pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()]) # Compile model on a specific device compile_job = hub.submit_compile_job( model=pt_model, device=device, input_specs=torch_model.get_input_spec(), ) # Get target model to run on-device target_model = compile_job.get_target_model() ``` Step 2: **Performance profiling on cloud-hosted device** After compiling models from step 1. Models can be profiled model on-device using the `target_model`. Note that this scripts runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to a provided job URL to view a variety of on-device performance metrics. ```python profile_job = hub.submit_profile_job( model=target_model, device=device, ) ``` Step 3: **Verify on-device accuracy** To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud hosted device. ```python input_data = torch_model.sample_inputs() inference_job = hub.submit_inference_job( model=target_model, device=device, inputs=input_data, ) on_device_output = inference_job.download_output_data() ``` With the output of the model, you can compute like PSNR, relative errors or spot check the output with expected output. **Note**: This on-device profiling and inference requires access to Qualcomm® AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup). ## Run demo on a cloud-hosted device You can also run the demo on-device. ```bash python -m qai_hub_models.models.deeplabv3_plus_mobilenet_quantized.demo --on-device ``` **NOTE**: If you want running in a Jupyter Notebook or Google Colab like environment, please add the following to your cell (instead of the above). 
``` %run -m qai_hub_models.models.deeplabv3_plus_mobilenet_quantized.demo -- --on-device ``` ## Deploying compiled model to Android The models can be deployed using multiple runtimes: - TensorFlow Lite (`.tflite` export): [This tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a guide to deploy the .tflite model in an Android application. - QNN (`.so` export ): This [sample app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html) provides instructions on how to use the `.so` shared library in an Android application. ## View on Qualcomm® AI Hub Get more details on DeepLabV3-Plus-MobileNet-Quantized's performance across various devices [here](https://aihub.qualcomm.com/models/deeplabv3_plus_mobilenet_quantized). Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/) ## License - The license for the original implementation of DeepLabV3-Plus-MobileNet-Quantized can be found [here](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf). - The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url}) ## References * [Rethinking Atrous Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1706.05587) * [Source Model Implementation](https://github.com/jfzhang95/pytorch-deeplab-xception) ## Community * Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:[email protected]).
{"license": "mit", "library_name": "pytorch", "tags": ["quantized", "android"], "datasets": ["VOC2012"], "pipeline_tag": "image-segmentation"}
qualcomm/DeepLabV3-Plus-MobileNet-Quantized
null
[ "pytorch", "tflite", "quantized", "android", "image-segmentation", "dataset:VOC2012", "arxiv:1706.05587", "license:mit", "region:us" ]
null
2024-04-30T21:16:43+00:00
feature-extraction
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
claudios/CodeGPT-small-java
null
[ "transformers", "safetensors", "gpt2", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T21:17:12+00:00
null
null
{"license": "unknown"}
hautc/z4
null
[ "license:unknown", "region:us" ]
null
2024-04-30T21:18:14+00:00
null
null
{}
t4coxt00t/Radio_Daemon
null
[ "tensorboard", "safetensors", "region:us" ]
null
2024-04-30T21:19:12+00:00
feature-extraction
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
claudios/CodeGPT-Multilingual
null
[ "transformers", "safetensors", "gpt2", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T21:20:43+00:00
feature-extraction
transformers
## Model Description We introduce Dragon-multiturn, a retriever specifically designed for the conversational QA scenario. It can handle conversational query which combine dialogue history with the current query. It is built on top of the [Dragon](https://huggingface.co/facebook/dragon-plus-query-encoder) retriever. The details of Dragon-multiturn can be found in [here](https://arxiv.org/abs/2401.10225). **Please note that this repository is for the context encoder of Dragon-multiturn, and we use a separate model for the query encoder, which can be found [here](https://huggingface.co/nvidia/dragon-multiturn-query-encoder).** ## Other Resources [Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B) &ensp; [Llama3-ChatQA-1.5-70B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-70B) &ensp; [Evaluation Data](https://huggingface.co/datasets/nvidia/ConvRAG-Bench) &ensp; [Training Data](https://huggingface.co/datasets/nvidia/ChatQA-Training-Data) ## Benchmark Results <style type="text/css"> .tg {border:none;border-collapse:collapse;border-spacing:0;} .tg td{border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;overflow:hidden; padding:10px 5px;word-break:normal;} .tg th{border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;font-weight:normal; overflow:hidden;padding:10px 5px;word-break:normal;} .tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:center} .tg .tg-0pky{border-color:inherit;text-align:left;vertical-align:center} </style> <table class="tg"> <thead> <tr> <th class="tg-0pky" rowspan="2"></th> <th class="tg-c3ow" colspan="2">Average</th> <th class="tg-c3ow" colspan="2">Doc2Dial</th> <th class="tg-c3ow" colspan="2">QuAC</th> <th class="tg-c3ow" colspan="2">QReCC</th> <th class="tg-c3ow" colspan="2">TopiOCQA</th> <th class="tg-c3ow" colspan="2">INSCIT</th> </tr> <tr> <th class="tg-c3ow">top-1</th> <th class="tg-c3ow">top-5</th> <th class="tg-c3ow">top-1</th> <th class="tg-c3ow">top-5</th> <th class="tg-c3ow">top-1</th> <th class="tg-c3ow">top-5</th> <th class="tg-c3ow">top-1</th> <th class="tg-c3ow">top-5</th> <th class="tg-c3ow">top-5*</th> <th class="tg-c3ow">top-20*</th> <th class="tg-c3ow">top-5*</th> <th class="tg-c3ow">top-20*</th> </tr> </thead> <tbody> <tr> <td class="tg-0pky">Dragon</td> <td class="tg-c3ow">46.3</td> <td class="tg-c3ow">73.1</td> <td class="tg-c3ow">43.3</td> <td class="tg-c3ow">75.6</td> <td class="tg-c3ow">56.8</td> <td class="tg-c3ow">82.9</td> <td class="tg-c3ow">46.2</td> <td class="tg-c3ow">82.0</td> <td class="tg-c3ow">57.7</td> <td class="tg-c3ow">78.8</td> <td class="tg-c3ow">27.5</td> <td class="tg-c3ow">46.2</td> </tr> <tr> <td class="tg-0pky">Dragon-multiturn</td> <td class="tg-c3ow">53.0</td> <td class="tg-c3ow">81.2</td> <td class="tg-c3ow">48.6</td> <td class="tg-c3ow">83.5</td> <td class="tg-c3ow">54.8</td> <td class="tg-c3ow">83.2</td> <td class="tg-c3ow">49.6</td> <td class="tg-c3ow">86.7</td> <td class="tg-c3ow">64.5</td> <td class="tg-c3ow">85.2</td> <td class="tg-c3ow">47.4</td> <td class="tg-c3ow">67.1</td> </tr> </tbody> </table> Retrieval results across five multi-turn QA datasets (Doc2Dial, QuAC, QReCC, TopiOCQA, INSCIT) with the average top-1 and top-5 recall scores. *Since the average context length in TopiOCQA and INSCIT is smaller than in other datasets, we report top-5 and top-20 to roughly match the context lengths of top-1 and top-5, respectively, in those datasets. 
## How to use ```python import torch from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('nvidia/dragon-multiturn-query-encoder') query_encoder = AutoModel.from_pretrained('nvidia/dragon-multiturn-query-encoder') context_encoder = AutoModel.from_pretrained('nvidia/dragon-multiturn-context-encoder') query = [ {"role": "user", "content": "I need help planning my Social Security benefits for my survivors."}, {"role": "agent", "content": "Are you currently planning for your future?"}, {"role": "user", "content": "Yes, I am."} ] contexts = [ "Benefits Planner: Survivors | Planning For Your Survivors \nAs you plan for the future , you'll want to think about what your family would need if you should die now. Social Security can help your family if you have earned enough Social Security credits through your work. You can earn up to four credits each year. In 2019 , for example , you earn one credit for each $1,360 of wages or self - employment income. When you have earned $5,440 , you have earned your four credits for the year. The number of credits needed to provide benefits for your survivors depends on your age when you die. No one needs more than 40 credits 10 years of work to be eligible for any Social Security benefit. But , the younger a person is , the fewer credits they must have for family members to receive survivors benefits. Benefits can be paid to your children and your spouse who is caring for the children even if you don't have the required number of credits. They can get benefits if you have credit for one and one - half years of work 6 credits in the three years just before your death. For Your Widow Or Widower \nThere are about five million widows and widowers receiving monthly Social Security benefits based on their deceased spouse's earnings record.", "Benefits Planner: Retirement \nOther Things to Consider \nWhat Is The Best Age To Start Your Benefits? The answer is that there is no one \" best age \" for everyone and, ultimately, it is your choice. You should make an informed decision about when to apply for benefits based on your individual and family circumstances. Your monthly benefit amount can differ substantially based on the age when you start receiving benefits. If you decide to start benefits : before your full retirement age , your benefit will be smaller but you will receive it for a longer period of time. at your full retirement age or later , you will receive a larger monthly benefit for a shorter period of time. The amount you receive when you first get benefits sets the base for the amount you will receive for the rest of your life. You may want to consider the following when you make that decision : If you plan to continue working , there are limits on how much you can earn each year between age 62 and full retirement age and still get all your benefits. Depending on the amount of your benefit and your earnings for the year , you may have to give up some of your benefits." 
] ## convert query into a format as follows: ## user: {user}\nagent: {agent}\nuser: {user} formatted_query = '\n'.join([turn['role'] + ": " + turn['content'] for turn in query]).strip() ## get query and context embeddings query_input = tokenizer(formatted_query, return_tensors='pt') ctx_input = tokenizer(contexts, padding=True, truncation=True, max_length=512, return_tensors='pt') query_emb = query_encoder(**query_input).last_hidden_state[:, 0, :] # (1, emb_dim) ctx_emb = context_encoder(**ctx_input).last_hidden_state[:, 0, :] # (num_ctx, emb_dim) ## Compute similarity scores using dot product similarities = query_emb.matmul(ctx_emb.transpose(0, 1)) # (1, num_ctx) ## rank the similarity (from highest to lowest) ranked_results = torch.argsort(similarities, dim=-1, descending=True) # (1, num_ctx) ``` ## License Dragon-multiturn is built on top of [Dragon](https://arxiv.org/abs/2302.07452). We refer users to the original license of the Dragon model. ## Correspondence to Zihan Liu ([email protected]), Wei Ping ([email protected]) ## Citation <pre> @article{liu2024chatqa, title={ChatQA: Building GPT-4 Level Conversational QA Models}, author={Liu, Zihan and Ping, Wei and Roy, Rajarshi and Xu, Peng and Lee, Chankyu and Shoeybi, Mohammad and Catanzaro, Bryan}, journal={arXiv preprint arXiv:2401.10225}, year={2024}} </pre>
{"language": ["en"], "license": ["other"], "tag": ["dragon", "retriever", "conversation", "multi-turn", "conversational query"]}
nvidia/dragon-multiturn-context-encoder
null
[ "transformers", "pytorch", "bert", "feature-extraction", "en", "arxiv:2401.10225", "arxiv:2302.07452", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-30T21:21:45+00:00
null
null
{}
acook0011/arxiv_summarization_model
null
[ "region:us" ]
null
2024-04-30T21:22:46+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
OwOOwO/finalupdatec1
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T21:23:02+00:00
sentence-similarity
sentence-transformers
# luiz-and-robert-thesis/mpnet-frozen-newtriplets-lr-2e-7-m-1-e-5 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('luiz-and-robert-thesis/mpnet-frozen-newtriplets-lr-2e-7-m-1-e-5') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=luiz-and-robert-thesis/mpnet-frozen-newtriplets-lr-2e-7-m-1-e-5) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 5885 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.COSINE', 'triplet_margin': 1} ``` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-07 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 4413, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
luiz-and-robert-thesis/mpnet-frozen-newtriplets-lr-2e-7-m-1-e-5
null
[ "sentence-transformers", "safetensors", "mpnet", "feature-extraction", "sentence-similarity", "endpoints_compatible", "region:us" ]
null
2024-04-30T21:23:24+00:00
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results01 This model is a fine-tuned version of [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.30.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.13.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "results01", "results": []}]}
KaKashii/results01
null
[ "pytorch", "generated_from_trainer", "license:apache-2.0", "region:us" ]
null
2024-04-30T21:24:26+00:00
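The hyperparameters listed in the card above map directly onto Hugging Face `TrainingArguments`. The sketch below restates them in code; the output directory is arbitrary, and model loading, the dataset, and the `Trainer` call are omitted because the card does not say what data was used.

```python
from transformers import TrainingArguments

# Hyperparameters copied from the card; everything else about the run is unknown.
args = TrainingArguments(
    output_dir="results01",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=2,
)
```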
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1626 - Accuracy: 0.6212 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | No log | 0.8421 | 4 | 1.1626 | 0.6212 | | No log | 1.8947 | 9 | 1.1700 | 0.6212 | | 1.2355 | 2.5263 | 12 | 1.1713 | 0.6212 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swin-tiny-patch4-window7-224", "model-index": [{"name": "swin-tiny-patch4-window7-224-finetuned-eurosat", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.6212121212121212, "name": "Accuracy"}]}]}]}
dogukanbas/swin-tiny-patch4-window7-224-finetuned-eurosat
null
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T21:25:39+00:00
null
null
Number of experts present in the library: 10 | Expert Name | Base Model | Trained on | Adapter Type | | --- | --- | --- | --- | | phi2_joint_3epoch_sim_cluster_10 | phi-2 | sordonia/flan-10k-flat/dream_read_the_following_conversation_and_answer_the_question,app_reviews_convert_to_star_rating,cos_e_v1_11_question_option_description_text,social_i_qa_Show_choices_and_generate_answer,quartz_answer_question_based_on,sciq_Direct_Question_Closed_Book_,qasc_qa_with_separated_facts_3,quartz_given_the_fact_answer_the_q,quartz_answer_question_below,kilt_tasks_hotpotqa_final_exam,sciq_Multiple_Choice,wiqa_does_the_supposed_perturbation_have_an_effect,cos_e_v1_11_question_description_option_text,wiki_qa_Is_This_True_,quartz_use_info_from_question_paragraph,sciq_Direct_Question,qasc_qa_with_separated_facts_2,wiqa_which_of_the_following_is_the_supposed_perturbation,app_reviews_convert_to_rating,cos_e_v1_11_question_option_description_id,wiqa_effect_with_string_answer,qasc_qa_with_separated_facts_5,dream_baseline,quartz_having_read_above_passage,cos_e_v1_11_question_description_option_id,qasc_qa_with_separated_facts_1,cos_e_v1_11_description_question_option_text,qasc_qa_with_combined_facts_1,qasc_is_correct_1,cos_e_v1_11_description_question_option_id,social_i_qa_Check_if_a_random_answer_is_valid_or_not,sciq_Multiple_Choice_Closed_Book_,quartz_use_info_from_paragraph_question,qasc_is_correct_2,qasc_qa_with_separated_facts_4,quartz_read_passage_below_choose,quartz_paragraph_question_plain_concat,sciq_Multiple_Choice_Question_First | lora | | phi2_joint_3epoch_sim_cluster_3 | phi-2 | sordonia/flan-10k-flat/wiki_qa_found_on_google,app_reviews_categorize_rating_using_review,race_middle_Is_this_the_right_answer,super_glue_cb_1_0_2,wiki_qa_Topic_Prediction_Answer_Only,wiki_qa_Direct_Answer_to_Question,super_glue_wsc_fixed_1_0_2,cot_gsm8k_ii,unified_qa_science_inst,race_high_Is_this_the_right_answer,cot_strategyqa,cot_ecqa_ii,quarel_do_not_use,wiki_qa_exercise,wiki_qa_automatic_system,cot_creak_ii,quarel_heres_a_story,quarel_choose_between,stream_qed_ii,wiki_qa_Topic_Prediction_Question_Only,glue_qnli_2_0_0,cot_sensemaking_ii,super_glue_copa_1_0_2,social_i_qa_Generate_the_question_from_the_answer,social_i_qa_Show_choices_and_generate_index,quarel_testing_students,wiki_qa_Topic_Prediction_Question_and_Answer_Pair,wiki_qa_Decide_good_answer,wiki_qa_Jeopardy_style,wiki_qa_Generate_Question_from_Topic,definite_pronoun_resolution_1_1_0,wiqa_effect_with_label_answer,glue_wnli_2_0_0,cot_qasc,cot_strategyqa_ii,quarel_logic_test,stream_aqua_ii | lora | | phi2_joint_3epoch_sim_cluster_9 | phi-2 | sordonia/flan-10k-flat/super_glue_rte_1_0_2,cot_sensemaking,super_glue_wic_1_0_2,cos_e_v1_11_rationale,anli_r3_0_1_0,dream_generate_last_utterance,paws_wiki_1_1_0,cos_e_v1_11_generate_explanation_given_text,cot_creak,stream_aqua,snli_1_1_0,cos_e_v1_11_i_think,glue_qqp_2_0_0,cos_e_v1_11_explain_why_human,anli_r2_0_1_0,anli_r1_0_1_0,glue_stsb_2_0_0,cos_e_v1_11_aligned_with_common_sense,glue_mnli_2_0_0,social_i_qa_I_was_wondering,cosmos_qa_1_0_0,glue_mrpc_2_0_0,social_i_qa_Generate_answer | lora | | phi2_joint_3epoch_sim_cluster_1 | phi-2 | 
sordonia/flan-10k-flat/natural_questions_open_1_0_0,web_questions_whats_the_answer,web_questions_question_answer,dbpedia_14_pick_one_category_for_the_following_text,kilt_tasks_hotpotqa_combining_facts,web_questions_short_general_knowledge_q,kilt_tasks_hotpotqa_straighforward_qa,adversarial_qa_dbidaf_generate_question,adversarial_qa_droberta_based_on,web_questions_get_the_answer,kilt_tasks_hotpotqa_complex_question,web_questions_potential_correct_answer,trivia_qa_rc_1_1_0,kilt_tasks_hotpotqa_formulate,adversarial_qa_dbert_based_on,adversarial_qa_dbidaf_based_on,squad_v1_1_3_0_0 | lora | | phi2_joint_3epoch_sim_cluster_5 | phi-2 | sordonia/flan-10k-flat/race_middle_Read_the_article_and_answer_the_question_no_option_,race_high_Select_the_best_answer,quail_description_context_question_answer_id,quail_context_question_description_text,race_high_Read_the_article_and_answer_the_question_no_option_,race_high_Select_the_best_answer_no_instructions_,quail_context_description_question_answer_id,race_high_Taking_a_test,super_glue_multirc_1_0_2,race_middle_Select_the_best_answer,quail_context_question_description_answer_id,quail_description_context_question_answer_text,quail_context_question_answer_description_text,race_high_Select_the_best_answer_generate_span_,race_middle_Select_the_best_answer_generate_span_,quail_context_question_answer_description_id,quail_context_description_question_answer_text,quail_context_description_question_text,quail_context_question_description_answer_text,quail_description_context_question_text,race_middle_Taking_a_test,quail_no_prompt_id,quail_no_prompt_text,race_middle_Select_the_best_answer_no_instructions_ | lora | | phi2_joint_3epoch_sim_cluster_8 | phi-2 | sordonia/flan-10k-flat/ropes_background_new_situation_answer,ropes_prompt_bottom_no_hint,ropes_plain_background_situation,ropes_new_situation_background_answer,ropes_given_background_situation,ropes_prompt_bottom_hint_beginning,ropes_prompt_beginning,ropes_read_background_situation,ropes_plain_bottom_hint,ropes_plain_no_background,ropes_prompt_mix,ropes_background_situation_middle | lora | | phi2_joint_3epoch_sim_cluster_2 | phi-2 | sordonia/flan-10k-flat/adversarial_qa_dbidaf_question_context_answer,super_glue_record_1_0_2,wiki_hop_original_generate_object,adversarial_qa_droberta_tell_what_it_is,dbpedia_14_given_a_choice_of_categories_,wiki_hop_original_choose_best_object_affirmative_3,quac_1_0_0,wiki_hop_original_choose_best_object_interrogative_1,wiki_hop_original_choose_best_object_affirmative_1,adversarial_qa_dbert_answer_the_following_q,wiki_hop_original_choose_best_object_interrogative_2,adversarial_qa_droberta_question_context_answer,squad_v2_0_3_0_0,wiki_hop_original_generate_subject,wiki_bio_guess_person,adversarial_qa_dbidaf_answer_the_following_q,adversarial_qa_droberta_answer_the_following_q,adversarial_qa_dbert_tell_what_it_is,race_high_Write_a_multi_choice_question_options_given_,wiki_hop_original_choose_best_object_affirmative_2,wiki_hop_original_generate_subject_and_object,drop_2_0_0,adversarial_qa_dbert_question_context_answer,adversarial_qa_dbidaf_tell_what_it_is | lora | | phi2_joint_3epoch_sim_cluster_7 | phi-2 | 
sordonia/flan-10k-flat/glue_sst2_2_0_0,adversarial_qa_droberta_generate_question,true_case,stream_qed,huggingface_xsum,cot_esnli,cot_gsm8k,trec_1_0_0,yelp_polarity_reviews_0_2_0,lambada_1_0_0,glue_cola_2_0_0,ag_news_subset_1_0_0,gem_dart_1_1_0,math_dataset_algebra__linear_1d_1_0_0,cnn_dailymail_3_4_0,wiki_hop_original_explain_relation,dbpedia_14_given_list_what_category_does_the_paragraph_belong_to,gem_wiki_lingua_english_en_1_1_0,fix_punct,imdb_reviews_plain_text_1_0_0,race_middle_Write_a_multi_choice_question_for_the_following_article,gigaword_1_2_0,dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to,gem_web_nlg_en_1_1_0,word_segment,race_high_Write_a_multi_choice_question_for_the_following_article,wmt16_translate_de_en_1_0_0,cot_ecqa,aeslc_1_0_0,dream_generate_first_utterance,wmt16_translate_fi_en_1_0_0,dream_answer_to_dialogue,para_crawl_enes,adversarial_qa_dbert_generate_question,race_middle_Write_a_multi_choice_question_options_given_,wmt14_translate_fr_en_1_0_0 | lora | | phi2_joint_3epoch_sim_cluster_6 | phi-2 | sordonia/flan-10k-flat/quoref_Context_Contains_Answer,duorc_SelfRC_generate_question_by_answer,quoref_Find_Answer,duorc_ParaphraseRC_movie_director,duorc_ParaphraseRC_answer_question,quoref_Found_Context_Online,quoref_Read_And_Extract_,duorc_ParaphraseRC_title_generation,duorc_ParaphraseRC_decide_worth_it,quoref_What_Is_The_Answer,duorc_ParaphraseRC_generate_question,quoref_Guess_Title_For_Context,quoref_Answer_Test,duorc_SelfRC_question_answering,duorc_SelfRC_title_generation,duorc_ParaphraseRC_generate_question_by_answer,duorc_ParaphraseRC_extract_answer,duorc_SelfRC_answer_question,duorc_SelfRC_decide_worth_it,duorc_ParaphraseRC_question_answering,quoref_Answer_Question_Given_Context,duorc_SelfRC_extract_answer,quoref_Guess_Answer,quoref_Answer_Friend_Question,duorc_SelfRC_movie_director,duorc_SelfRC_generate_question,quoref_Given_Context_Answer_Question | lora | | phi2_joint_3epoch_sim_cluster_4 | phi-2 | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process,wiqa_what_is_the_final_step_of_the_following_process,wmt16_translate_ro_en_1_0_0,wiqa_what_might_be_the_last_step_of_the_process,wiki_bio_key_content,gem_common_gen_1_1_0,duorc_SelfRC_build_story_around_qa,app_reviews_generate_review,wiki_bio_what_content,wiki_bio_who,gem_e2e_nlg_1_1_0,cot_esnli_ii,wmt16_translate_tr_en_1_0_0,wiqa_what_is_the_missing_first_step,wiki_bio_comprehension,coqa_1_0_0,duorc_ParaphraseRC_build_story_around_qa,multi_news_1_0_0 | lora | Last updated on: 2024-04-30 21:25:48+00:00
{}
zhan1993/library-phi_2-v3-10-flan-clusters
null
[ "region:us" ]
null
2024-04-30T21:25:48+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["trl", "sft"]}
EdBerg/001Llama3_b_finance_finetuned_test
null
[ "transformers", "safetensors", "trl", "sft", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T21:27:42+00:00
automatic-speech-recognition
transformers
{}
wraps/whisper-small-fr
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2024-04-30T21:28:45+00:00
question-answering
transformers
{}
yileitu/finetuned_qa_model
null
[ "transformers", "safetensors", "distilbert", "question-answering", "endpoints_compatible", "region:us" ]
null
2024-04-30T21:31:48+00:00
text-generation
transformers
## NumFaLM 3B NumFaLM 3B is a bilingual language model trained in Thai and English. It uses the Llama architecture and was pretrained from scratch. It was built to support open-source AI and research on bilingual language models and to improve small language models. We released the training script and training datasets so you can study both the training procedure and the data. - GitHub: [https://github.com/wannaphong/NumFaLM](https://github.com/wannaphong/NumFaLM) - Training script: [https://github.com/wannaphong/EasyLM/tree/numfa_pretraining](https://github.com/wannaphong/EasyLM/tree/numfa_pretraining) - Train Datasets: [wannaphong/mark13](https://huggingface.co/datasets/wannaphong/mark13) We forked EasyLM and added support for training from Hugging Face datasets, but Hugging Face was down many times while we trained the model, so we were only able to train for one epoch. # Acknowledgements Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). We used a TPU4-64 to train the model for about 4 days (1 epoch). Thank you [TPU Research Cloud](https://sites.research.google/trc/about/) and [EasyLM project](https://github.com/young-geng/EasyLM)! We used EasyLM for pretraining the model.
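The card does not include a usage snippet, so here is a minimal, hedged generation sketch; it assumes the checkpoint loads as a standard Llama-architecture causal LM through the `transformers` auto classes, and the prompt is a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wannaphong/numfalm-3b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The model is bilingual (Thai/English); a Thai prompt works the same way.
prompt = "Bangkok is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```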
{"language": ["en", "th"], "license": "apache-2.0", "datasets": ["wannaphong/mark13"], "pipeline_tag": "text-generation"}
wannaphong/numfalm-3b
null
[ "transformers", "safetensors", "llama", "text-generation", "en", "th", "dataset:wannaphong/mark13", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T21:32:09+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # names-whisper-en This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1224 - Wer: 2.5974 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 0.0691 | 1.5576 | 1000 | 0.1273 | 3.1158 | | 0.0078 | 3.1153 | 2000 | 0.1186 | 2.6745 | | 0.004 | 4.6729 | 3000 | 0.1189 | 2.5386 | | 0.0013 | 6.2305 | 4000 | 0.1222 | 2.5839 | | 0.0011 | 7.7882 | 5000 | 0.1224 | 2.5974 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
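The card reports WER but no inference code, so here is a minimal, hedged transcription sketch using the `transformers` ASR pipeline; the audio file name is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned Whisper-small checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="seifooo/names-whisper-en",
)

# Placeholder audio file: any audio format that ffmpeg can decode works here.
result = asr("sample_name.wav")
print(result["text"])
```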
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "names-whisper-en", "results": []}]}
seifooo/names-whisper-en
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-30T21:32:13+00:00
feature-extraction
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
claudios/plbart-base
null
[ "transformers", "safetensors", "plbart", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T21:32:17+00:00
text-generation
transformers
# CodeGemma Model Page : [CodeGemma](https://ai.google.dev/gemma/docs/codegemma) Resources and Technical Documentation : [Technical Report](https://goo.gle/codegemma) : [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) Terms of Use : [Terms](https://ai.google.dev/gemma/terms) Authors : Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description CodeGemma is a collection of lightweight open code models built on top of Gemma. CodeGemma models are text-to-text and text-to-code decoder-only models and are available as a 7 billion pretrained variant that specializes in code completion and code generation tasks, a 7 billion parameter instruction-tuned variant for code chat and instruction following and a 2 billion parameter pretrained variant for fast code completion. | | [ **codegemma-2b** ](https://huggingface.co/google/codegemma-1.1-2b) | [codegemma-7b](https://huggingface.co/google/codegemma-7b) | [codegemma-7b-it](https://huggingface.co/google/codegemma-1.1-7b-it) | |----------------------------------|:----------------------------------------------------------------:|:----------------------------------------------------------:|:----------------------------------------------------------------:| | Code Completion | ✅ | ✅ | | | Generation from natural language | | ✅ | ✅ | | Chat | | | ✅ | | Instruction Following | | | ✅ | ### Sample Usage #### For Code Completion Code completion can be used for infilling inside code editors. CodeGemma was trained for this task using the fill-in-the-middle (FIM) objective, where you provide a prefix and a suffix as context for the completion. The following tokens are used to separate the different parts of the input: - `<|fim_prefix|>` precedes the context before the completion we want to run. - `<|fim_suffix|>` precedes the suffix. You must put this token exactly where the cursor would be positioned in an editor, as this is the location that will be completed by the model. - `<|fim_middle|>` is the prompt that invites the model to run the generation. In addition to these, there's also `<|file_separator|>`, which is used to provide multi-file contexts. Please, make sure to not provide any extra spaces or newlines around the tokens, other than those that would naturally occur in the code fragment you want to complete. Here's an example: ```python from transformers import GemmaTokenizer, AutoModelForCausalLM model_id = "google/codegemma-1.1-2b" tokenizer = GemmaTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) prompt = '''\ <|fim_prefix|>import datetime def calculate_age(birth_year): """Calculates a person's age based on their birth year.""" current_year = datetime.date.today().year <|fim_suffix|> return age<|fim_middle|>\ ''' inputs = tokenizer(prompt, return_tensors="pt").to(model.device) prompt_len = inputs["input_ids"].shape[-1] outputs = model.generate(**inputs, max_new_tokens=100) print(tokenizer.decode(outputs[0][prompt_len:])) ``` This may return something like the following: ``` age = current_year - birth_year<|file_separator|>test_calculate_age.py <|fim_suffix|> assert calculate_age(1990) == 33 assert calculate_age(1980) == 43 assert calculate_age(1970) == 53 assert calculate_age(1960) == 63 assert calculate_age(1950) == 73 ``` Note the extra content after the correct completion. The model returns the completion, followed by one of the FIM tokens or the EOS token. You should ignore everything that comes after any of these tokens. 
A good way to achieve this is by providing a list of terminators to the `generate` function, like this: ```python FIM_PREFIX = '<|fim_prefix|>' FIM_SUFFIX = '<|fim_suffix|>' FIM_MIDDLE = '<|fim_middle|>' FIM_FILE_SEPARATOR = '<|file_separator|>' terminators = tokenizer.convert_tokens_to_ids([FIM_PREFIX, FIM_MIDDLE, FIM_SUFFIX, FIM_FILE_SEPARATOR]) terminators += [tokenizer.eos_token_id] outputs = model.generate( **inputs, max_new_tokens=100, eos_token_id=terminators, ) ``` In this case, generation stops as soon as the first delimiter is found in the response: ``` age = current_year - birth_year<|file_separator|> ``` #### For Code Generation ```python from transformers import GemmaTokenizer, AutoModelForCausalLM tokenizer = GemmaTokenizer.from_pretrained("google/codegemma-1.1-2b") model = AutoModelForCausalLM.from_pretrained("google/codegemma-1.1-2b") input_text = "Write me a Python function to calculate the nth fibonacci number." input_ids = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` ### Inputs and Outputs Inputs : For pretrained model variants: code prefix and/or suffix for code completion and generation scenarios, or natural language text or prompt : For instruction tuned model variant: natural language text or prompt Outputs : For pretrained model variants: fill-in-the-middle code completion, code and natural language : For instruction tuned model variant: code and natural language ## Model Data Data used for model training and how the data was processed. ### Training Dataset Using Gemma as the base model, CodeGemma 2B and 7B pretrained variants are further trained on an additional 500 to 1000 billion tokens of primarily English language data from publicly available code repositories, open source mathematics datasets and synthetically generated code. ### Training Data Processing The following data pre-processing techniques were applied: * FIM Pretrained CodeGemma models focus on fill-in-the-middle (FIM) tasks. The models are trained to work with both PSM and SPM modes. Our FIM settings are 80% to 90% FIM rate with 50-50 PSM/SPM. * Dependency Graph-based Packing and Unit Test-based Lexical Packing techniques: To improve model alignment with real-world applications, we structured training examples at the project/repository level to co-locate the most relevant source files within each repository. Specifically, we employed two heuristic techniques: dependency graph-based packing and unit test-based lexical packing * We developed a novel technique for splitting the documents into prefix, middle, and suffix to make the suffix start in a more syntactically natural point rather than purely random distribution. * Safety: Similarly to Gemma, we deployed rigorous safety filtering including filtering personal data, CSAM filtering and other filtering based on content quality and safety in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11). ## Implementation Information Information about the hardware and software used to train the models. ### Hardware CodeGemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e). ### Software Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/). 
## Evaluation Information Model evaluation metrics and results. ### Evaluation Approach We evaluate CodeGemma on a variety of academic benchmarks across several domains: * Code completion benchmarks: HumanEval Single Line and Multiple Line Infilling * Code generation benchmarks: HumanEval, MBPP, BabelCode (C++, C#, Go, Java, JavaScript, Kotlin, Python, Rust) * Q&A: BoolQ, PIQA, TriviaQA * Natural Language: ARC-Challenge, HellaSwag, MMLU, WinoGrande * Math Reasoning: GSM8K, MATH ### Evaluation Results #### Coding Benchmarks Benchmark | [2B](https://huggingface.co/google/codegemma-2b) | [2B (1.1)](https://huggingface.co/google/codegemma-1.1-2b) | [7B](https://huggingface.co/google/codegemma-7b) | [7B-IT](https://huggingface.co/google/codegemma-7b-it) | [7B-IT (1.1)](https://huggingface.co/google/codegemma-1.1-7b-it) ----------------------|------|----------|------|-------|------------ HumanEval | 31.1 | 37.8 | 44.5 | 56.1 | 60.4 MBPP | 43.6 | 49.2 | 56.2 | 54.2 | 55.6 HumanEval Single Line | 78.4 | 79.3 | 76.1 | 68.3 | 77.4 HumanEval Multi Line | 51.4 | 51.0 | 58.4 | 20.1 | 23.7 BC HE C++ | 24.2 | 19.9 | 32.9 | 42.2 | 46.6 BC HE C# | 10.6 | 26.1 | 22.4 | 26.7 | 54.7 BC HE Go | 20.5 | 18.0 | 21.7 | 28.6 | 34.2 BC HE Java | 29.2 | 29.8 | 41.0 | 48.4 | 50.3 BC HE JavaScript | 21.7 | 28.0 | 39.8 | 46.0 | 48.4 BC HE Kotlin | 28.0 | 32.3 | 39.8 | 51.6 | 47.8 BC HE Python | 21.7 | 36.6 | 42.2 | 48.4 | 54.0 BC HE Rust | 26.7 | 24.2 | 34.1 | 36.0 | 37.3 BC MBPP C++ | 47.1 | 38.9 | 53.8 | 56.7 | 63.5 BC MBPP C# | 28.7 | 45.3 | 32.5 | 41.2 | 62.0 BC MBPP Go | 45.6 | 38.9 | 43.3 | 46.2 | 53.2 BC MBPP Java | 41.8 | 49.7 | 50.3 | 57.3 | 62.9 BC MBPP JavaScript | 45.3 | 45.0 | 58.2 | 61.4 | 61.4 BC MBPP Kotlin | 46.8 | 49.7 | 54.7 | 59.9 | 62.6 BC MBPP Python | 38.6 | 52.9 | 59.1 | 62.0 | 60.2 BC MBPP Rust | 45.3 | 47.4 | 52.9 | 53.5 | 52.3 #### Natural Language Benchmarks ![CodeGemma Natural Language Benchmarks](./codegemma_nl_benchmarks.png) ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Human evaluation on prompts covering content safety and representational harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for more details on evaluation approach. * Specific testing of cyber-offence capabilities, focusing on testing autonomous hacking capabilities and ensuring potential harms are limited. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety, representational harms, memorization, large-scale harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_results) for more details. ## Model Usage & Limitations These models have certain limitations that users should be aware of. ### Intended Usage Code Gemma models have a wide range of applications, which vary between IT and PT models. The following list of potential uses is not comprehensive. 
The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. Code Completion : PT models can be used to complete code with an IDE extension Code Generation : IT model can be used to generate code with or without an IDE extension Code Conversation : IT model can power conversation interfaces which discuss code. Code Education : IT model supports interactive code learning experiences, aids in syntax correction or provides coding practice. ### Known Limitations Large Language Models (LLMs) have limitations based on their training data and the inherent limitations of the technology. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_results) for more details on the limitations of LLMs. ### Ethical Considerations & Risks The development of large language models (LLMs) raises several ethical concerns. We have carefully considered multiple aspects in the development of these models. Please refer to [the same discussion](https://ai.google.dev/gemma/docs/model_card#ethical_considerations_and_risks) in the Gemma model card for model details. ### Benefits At the time of release, this family of models provides high-performance open code-focused large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the coding benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives.
{"license": "gemma", "library_name": "transformers", "extra_gated_heading": "Access CodeGemma on Hugging Face", "extra_gated_prompt": "To access CodeGemma on Hugging Face, you\u2019re required to review and agree to Google\u2019s usage license. To do this, please ensure you\u2019re logged-in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license", "license_link": "https://ai.google.dev/gemma/terms"}
google/codegemma-1.1-2b
null
[ "transformers", "safetensors", "gemma", "text-generation", "license:gemma", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T21:32:55+00:00
null
null
{}
dogukanbas/deit-base-distilled-patch16-224-finetuned-eurosat
null
[ "region:us" ]
null
2024-04-30T21:33:11+00:00
text-generation
transformers
# CodeGemma Model Page : [CodeGemma](https://ai.google.dev/gemma/docs/codegemma) Resources and Technical Documentation : [Technical Report](https://goo.gle/codegemma) : [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) Terms of Use : [Terms](https://ai.google.dev/gemma/terms) Authors : Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description CodeGemma is a collection of lightweight open code models built on top of Gemma. CodeGemma models are text-to-text and text-to-code decoder-only models and are available as a 7 billion pretrained variant that specializes in code completion and code generation tasks, a 7 billion parameter instruction-tuned variant for code chat and instruction following and a 2 billion parameter pretrained variant for fast code completion. | | [ **codegemma-2b** ](https://huggingface.co/google/codegemma-1.1-2b) | [codegemma-7b](https://huggingface.co/google/codegemma-7b) | [codegemma-7b-it](https://huggingface.co/google/codegemma-1.1-7b-it) | |----------------------------------|:----------------------------------------------------------------:|:----------------------------------------------------------:|:----------------------------------------------------------------:| | Code Completion | ✅ | ✅ | | | Generation from natural language | | ✅ | ✅ | | Chat | | | ✅ | | Instruction Following | | | ✅ | ### Sample Usage This model is intended to answer questions about code fragments, to generate code from natural language, or to engage in a conversation with the user about programming or technical problems. If you need to use code completion (for example, integrated in an IDE), we recommend you use one of the pre-trained models instead: [CodeGemma 7B](https://huggingface.co/google/codegemma-7b), or [CodeGemma 2B](https://huggingface.co/google/codegemma-2b). #### For Code Generation ```python from transformers import GemmaTokenizer, AutoModelForCausalLM tokenizer = GemmaTokenizer.from_pretrained("google/codegemma-1.1-7b-it") model = AutoModelForCausalLM.from_pretrained("google/codegemma-1.1-7b-it") input_text = "Write me a Python function to calculate the nth fibonacci number." input_ids = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Chat Template The instruction-tuned models use a chat template that must be adhered to for conversational use. The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet. Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction: ```py from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model_id = "google/codegemma-1.1-7b-it" dtype = torch.bfloat16 tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype=dtype, ) chat = [ { "role": "user", "content": "Write a hello world program" }, ] prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) ``` At this point, the prompt contains the following text: ``` <bos><start_of_turn>user Write a hello world program<end_of_turn> <start_of_turn>model ``` As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity (either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with the `<end_of_turn>` token. 
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's chat template. After the prompt is ready, generation can be performed like this: ```py inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150) ``` ### Inputs and Outputs Inputs : For pretrained model variants: code prefix and/or suffix for code completion and generation scenarios, or natural language text or prompt : For instruction tuned model variant: natural language text or prompt Outputs : For pretrained model variants: fill-in-the-middle code completion, code and natural language : For instruction tuned model variant: code and natural language ## Model Data Data used for model training and how the data was processed. ### Training Dataset Using Gemma as the base model, CodeGemma 2B and 7B pretrained variants are further trained on an additional 500 to 1000 billion tokens of primarily English language data from publicly available code repositories, open source mathematics datasets and synthetically generated code. ### Training Data Processing The following data pre-processing techniques were applied: * FIM Pretrained CodeGemma models focus on fill-in-the-middle (FIM) tasks. The models are trained to work with both PSM and SPM modes. Our FIM settings are 80% to 90% FIM rate with 50-50 PSM/SPM. * Dependency Graph-based Packing and Unit Test-based Lexical Packing techniques: To improve model alignment with real-world applications, we structured training examples at the project/repository level to co-locate the most relevant source files within each repository. Specifically, we employed two heuristic techniques: dependency graph-based packing and unit test-based lexical packing * We developed a novel technique for splitting the documents into prefix, middle, and suffix to make the suffix start in a more syntactically natural point rather than purely random distribution. * Safety: Similarly to Gemma, we deployed rigorous safety filtering including filtering personal data, CSAM filtering and other filtering based on content quality and safety in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11). ## Implementation Information Information about the hardware and software used to train the models. ### Hardware CodeGemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e). ### Software Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/). ## Evaluation Information Model evaluation metrics and results. 
### Evaluation Approach We evaluate CodeGemma on a variety of academic benchmarks across several domains: * Code completion benchmarks: HumanEval Single Line and Multiple Line Infilling * Code generation benchmarks: HumanEval, MBPP, BabelCode (C++, C#, Go, Java, JavaScript, Kotlin, Python, Rust) * Q&A: BoolQ, PIQA, TriviaQA * Natural Language: ARC-Challenge, HellaSwag, MMLU, WinoGrande * Math Reasoning: GSM8K, MATH ### Evaluation Results #### Coding Benchmarks Benchmark | [2B](https://huggingface.co/google/codegemma-2b) | [2B (1.1)](https://huggingface.co/google/codegemma-1.1-2b) | [7B](https://huggingface.co/google/codegemma-7b) | [7B-IT](https://huggingface.co/google/codegemma-7b-it) | [7B-IT (1.1)](https://huggingface.co/google/codegemma-1.1-7b-it) ----------------------|------|----------|------|-------|------------ HumanEval | 31.1 | 37.8 | 44.5 | 56.1 | 60.4 MBPP | 43.6 | 49.2 | 56.2 | 54.2 | 55.6 HumanEval Single Line | 78.4 | 79.3 | 76.1 | 68.3 | 77.4 HumanEval Multi Line | 51.4 | 51.0 | 58.4 | 20.1 | 23.7 BC HE C++ | 24.2 | 19.9 | 32.9 | 42.2 | 46.6 BC HE C# | 10.6 | 26.1 | 22.4 | 26.7 | 54.7 BC HE Go | 20.5 | 18.0 | 21.7 | 28.6 | 34.2 BC HE Java | 29.2 | 29.8 | 41.0 | 48.4 | 50.3 BC HE JavaScript | 21.7 | 28.0 | 39.8 | 46.0 | 48.4 BC HE Kotlin | 28.0 | 32.3 | 39.8 | 51.6 | 47.8 BC HE Python | 21.7 | 36.6 | 42.2 | 48.4 | 54.0 BC HE Rust | 26.7 | 24.2 | 34.1 | 36.0 | 37.3 BC MBPP C++ | 47.1 | 38.9 | 53.8 | 56.7 | 63.5 BC MBPP C# | 28.7 | 45.3 | 32.5 | 41.2 | 62.0 BC MBPP Go | 45.6 | 38.9 | 43.3 | 46.2 | 53.2 BC MBPP Java | 41.8 | 49.7 | 50.3 | 57.3 | 62.9 BC MBPP JavaScript | 45.3 | 45.0 | 58.2 | 61.4 | 61.4 BC MBPP Kotlin | 46.8 | 49.7 | 54.7 | 59.9 | 62.6 BC MBPP Python | 38.6 | 52.9 | 59.1 | 62.0 | 60.2 BC MBPP Rust | 45.3 | 47.4 | 52.9 | 53.5 | 52.3 #### Natural Language Benchmarks ![CodeGemma Natural Language Benchmarks](./codegemma_nl_benchmarks.png) ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Human evaluation on prompts covering content safety and representational harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for more details on evaluation approach. * Specific testing of cyber-offence capabilities, focusing on testing autonomous hacking capabilities and ensuring potential harms are limited. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety, representational harms, memorization, large-scale harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_results) for more details. ## Model Usage & Limitations These models have certain limitations that users should be aware of. ### Intended Usage Code Gemma models have a wide range of applications, which vary between IT and PT models. The following list of potential uses is not comprehensive. 
The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. Code Completion : PT models can be used to complete code with an IDE extension Code Generation : IT model can be used to generate code with or without an IDE extension Code Conversation : IT model can power conversation interfaces which discuss code. Code Education : IT model supports interactive code learning experiences, aids in syntax correction or provides coding practice. ### Known Limitations Large Language Models (LLMs) have limitations based on their training data and the inherent limitations of the technology. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_results) for more details on the limitations of LLMs. ### Ethical Considerations & Risks The development of large language models (LLMs) raises several ethical concerns. We have carefully considered multiple aspects in the development of these models. Please refer to [the same discussion](https://ai.google.dev/gemma/docs/model_card#ethical_considerations_and_risks) in the Gemma model card for model details. ### Benefits At the time of release, this family of models provides high-performance open code-focused large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the coding benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives.
{"license": "gemma", "library_name": "transformers", "extra_gated_heading": "Access CodeGemma on Hugging Face", "extra_gated_prompt": "To access CodeGemma on Hugging Face, you\u2019re required to review and agree to Google\u2019s usage license. To do this, please ensure you\u2019re logged-in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license", "pipeline_tag": "text-generation", "widget": [{"text": "<start_of_turn>user Write a Python function to calculate the nth fibonacci number.<end_of_turn> <start_of_turn>model\n"}], "inference": {"parameters": {"max_new_tokens": 200}}, "license_link": "https://ai.google.dev/gemma/terms"}
google/codegemma-1.1-7b-it
null
[ "transformers", "safetensors", "gemma", "text-generation", "license:gemma", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T21:33:23+00:00
null
null
{"license": "other", "license_name": "my-license", "license_link": "LICENSE"}
Nidhushan/test_model
null
[ "doi:10.57967/hf/2144", "license:other", "region:us" ]
null
2024-04-30T21:33:31+00:00
null
null
{}
IA-GAB/CRAVITY
null
[ "region:us" ]
null
2024-04-30T21:34:35+00:00
null
null
{}
HenryCai1129/adapter-llama-adaptertoxic2nontoxic-100-50-0.0009
null
[ "region:us" ]
null
2024-04-30T21:36:11+00:00
reinforcement-learning
stable-baselines3
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hui168 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hui168 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga hui168 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 10000), ('n_timesteps', 100000), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
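In addition to the RL Zoo commands above, the checkpoint can be loaded directly from Python. This is a hedged sketch: the zip filename inside the repo is an assumption (it follows the usual RL Zoo naming), and actually playing the game would additionally require the AtariWrapper and 4-frame stacking listed in the hyperparameters.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Assumed filename: RL Zoo uploads typically store the agent as "<algo>-<env>.zip".
checkpoint = load_from_hub(
    repo_id="hui168/dqn-SpaceInvadersNoFrameskip-colab",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)

# Load the trained policy; evaluation would also need the wrapped Atari env.
model = DQN.load(checkpoint)
print(model.policy)
```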
{"library_name": "stable-baselines3", "tags": ["SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "DQN", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "SpaceInvadersNoFrameskip-v4", "type": "SpaceInvadersNoFrameskip-v4"}, "metrics": [{"type": "mean_reward", "value": "320.00 +/- 138.20", "name": "mean_reward", "verified": false}]}]}]}
hui168/dqn-SpaceInvadersNoFrameskip-colab
null
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-30T21:36:28+00:00
null
null
{}
fatcat1337/dreamshaper-8-lcm-onnx
null
[ "onnx", "region:us" ]
null
2024-04-30T21:38:36+00:00
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-flower This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 2.2.1+cu121 - Datasets 2.7.1 - Tokenizers 0.13.3
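The card omits inference code; below is a minimal, hedged sketch of classifying a flower photo with the fine-tuned checkpoint, with a placeholder image path.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Vraj971/vit-base-patch16-224-finetuned-flower",
)

# Placeholder path: replace with any flower image on disk.
for pred in classifier("flower.jpg", top_k=3):
    print(f"{pred['label']}: {pred['score']:.3f}")
```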
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "model-index": [{"name": "vit-base-patch16-224-finetuned-flower", "results": []}]}
Vraj971/vit-base-patch16-224-finetuned-flower
null
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T21:39:05+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
abc88767/model24
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T21:40:52+00:00
null
diffusers
{}
Starry-Xin/gta-42000
null
[ "diffusers", "diffusers:Zero1to3StableDiffusionPipeline", "region:us" ]
null
2024-04-30T21:41:15+00:00
text-generation
transformers
# Model Card for Model ID Fine-tuned version of Phi-3-mini ## Model Details ### Model Description QLoRA fine-tuned version of Phi-3-mini-128k-instruct on the Alpaca dataset - **Developed by:** Microsoft; fine-tuning done by Yours Truly ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://huggingface.co/microsoft/Phi-3-mini-128k-instruct - **Paper:** https://aka.ms/phi3-tech-report
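## Quick-start sketch (illustrative) The snippet below is not part of the original card; it is a minimal sketch that assumes the QLoRA adapter has been merged into this checkpoint and that it loads with standard `transformers` calls, using an Alpaca-style prompt to match the fine-tuning data.

```python
# Hypothetical quick start: plain transformers inference with an Alpaca-style prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MadElf1337/phi-3-mini-alpaca"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain QLoRA in one sentence.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```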
{"language": ["en"], "license": "mit", "library_name": "transformers", "datasets": ["yahma/alpaca-cleaned"], "pipeline_tag": "text-generation"}
MadElf1337/phi-3-mini-alpaca
null
[ "transformers", "safetensors", "text-generation", "en", "dataset:yahma/alpaca-cleaned", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-30T21:41:36+00:00
null
null
# DETAILS - Trained with 40 hours of raw data; I can add more data in the future (Ariana Grande, Dua Lipa, Charlie Puth, Joji, Freddie Mercury, Michael Jackson) - Fine-tuned with the ov2 pretrain - 32k: still training, 5 epochs - 40k: maybe I'll train it - 48k: MAYBE I'll train it
{"license": "openrail"}
Sztef/SingerPreTrained
null
[ "license:openrail", "region:us" ]
null
2024-04-30T21:41:47+00:00
text-to-image
diffusers
# Perfect World 完美世界 v6 API Inference ![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/17433454521714513192.png) ## Get API Key Get API key from [ModelsLab API](http://modelslab.com), No Payment needed. Replace Key in below code, change **model_id** to "perfect-world-v6" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs) Try model for free: [Generate Images](https://modelslab.com/models/perfect-world-v6) Model link: [View model](https://modelslab.com/models/perfect-world-v6) View all models: [View Models](https://modelslab.com/models) import requests import json url = "https://modelslab.com/api/v6/images/text2img" payload = json.dumps({ "key": "your_api_key", "model_id": "perfect-world-v6", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) > Use this coupon code to get 25% off **DMGG0RBN**
{"license": "creativeml-openrail-m", "tags": ["modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic"], "pinned": true}
stablediffusionapi/perfect-world-v6
null
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-30T21:42:06+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
erbacher/TinyStories-10k-tokenizer
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T21:43:43+00:00
null
transformers
# jayrodge/Llama3-3-8B-Instruct-ft-loraAdap-Q4_K_M-GGUF This model was converted to GGUF format from [`patelmiteshn/Llama3-3-8B-Instruct-ft-loraAdap`](https://huggingface.co/patelmiteshn/Llama3-3-8B-Instruct-ft-loraAdap) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/patelmiteshn/Llama3-3-8B-Instruct-ft-loraAdap) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo jayrodge/Llama3-3-8B-Instruct-ft-loraAdap-Q4_K_M-GGUF --model llama3-3-8b-instruct-ft-loraadap.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo jayrodge/Llama3-3-8B-Instruct-ft-loraAdap-Q4_K_M-GGUF --model llama3-3-8b-instruct-ft-loraadap.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama3-3-8b-instruct-ft-loraadap.Q4_K_M.gguf -n 128 ```
{"library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"]}
jayrodge/Llama3-3-8B-Instruct-ft-loraAdap-Q4_K_M-GGUF
null
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "endpoints_compatible", "region:us" ]
null
2024-04-30T21:45:10+00:00
text2text-generation
transformers
{}
lkid08/25k_only_tag_clean_01-05
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T21:46:23+00:00
text-generation
transformers
{}
Lumona/opinion-extractor-filtering-llama-3
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T21:48:45+00:00
image-segmentation
pytorch
![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/deeplabv3_plus_mobilenet/web-assets/model_demo.png) # DeepLabV3-Plus-MobileNet: Optimized for Mobile Deployment ## Deep Convolutional Neural Network model for semantic segmentation DeepLabV3 is designed for semantic segmentation at multiple scales, trained on the various datasets. It uses MobileNet as a backbone. This model is an implementation of DeepLabV3-Plus-MobileNet found [here](https://github.com/jfzhang95/pytorch-deeplab-xception). This repository provides scripts to run DeepLabV3-Plus-MobileNet on Qualcomm® devices. More details on model performance across various devices, can be found [here](https://aihub.qualcomm.com/models/deeplabv3_plus_mobilenet). ### Model Details - **Model Type:** Semantic segmentation - **Model Stats:** - Model checkpoint: VOC2012 - Input resolution: 513x513 - Number of parameters: 5.80M - Model size: 22.2 MB | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model | ---|---|---|---|---|---|---|---| | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 13.206 ms | 20 - 35 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.tflite) | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 12.804 ms | 2 - 19 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.so](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.so) ## Installation This model can be installed as a Python package via pip. ```bash pip install qai-hub-models ``` ## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`. With this API token, you can configure your client to run models on the cloud hosted devices. ```bash qai-hub configure --api_token API_TOKEN ``` Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information. ## Demo off target The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input. ```bash python -m qai_hub_models.models.deeplabv3_plus_mobilenet.demo ``` The above demo runs a reference implementation of pre-processing, model inference, and post processing. **NOTE**: If you want running in a Jupyter Notebook or Google Colab like environment, please add the following to your cell (instead of the above). ``` %run -m qai_hub_models.models.deeplabv3_plus_mobilenet.demo ``` ### Run model on a cloud-hosted device In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following: * Performance check on-device on a cloud-hosted device * Downloads compiled assets that can be deployed on-device for Android. * Accuracy check between PyTorch and on-device outputs. 
```bash python -m qai_hub_models.models.deeplabv3_plus_mobilenet.export ``` ``` Profile Job summary of DeepLabV3-Plus-MobileNet -------------------------------------------------- Device: QCS8550 (Proxy) (12) Estimated Inference Time: 13.24 ms Estimated Peak Memory Range: 21.14-23.32 MB Compute Units: NPU (98) | Total (98) Profile Job summary of DeepLabV3-Plus-MobileNet -------------------------------------------------- Device: QCS8550 (Proxy) (12) Estimated Inference Time: 12.99 ms Estimated Peak Memory Range: 3.05-25.23 MB Compute Units: NPU (124) | Total (124) ``` ## How does this work? This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/DeepLabV3-Plus-MobileNet/export.py) leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model on-device. Lets go through each step below in detail: Step 1: **Compile model for on-device deployment** To compile a PyTorch model for on-device deployment, we first trace the model in memory using the `jit.trace` and then call the `submit_compile_job` API. ```python import torch import qai_hub as hub from qai_hub_models.models.deeplabv3_plus_mobilenet import Model # Load the model torch_model = Model.from_pretrained() torch_model.eval() # Device device = hub.Device("Samsung Galaxy S23") # Trace model input_shape = torch_model.get_input_spec() sample_inputs = torch_model.sample_inputs() pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()]) # Compile model on a specific device compile_job = hub.submit_compile_job( model=pt_model, device=device, input_specs=torch_model.get_input_spec(), ) # Get target model to run on-device target_model = compile_job.get_target_model() ``` Step 2: **Performance profiling on cloud-hosted device** After compiling models from step 1. Models can be profiled model on-device using the `target_model`. Note that this scripts runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to a provided job URL to view a variety of on-device performance metrics. ```python profile_job = hub.submit_profile_job( model=target_model, device=device, ) ``` Step 3: **Verify on-device accuracy** To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud hosted device. ```python input_data = torch_model.sample_inputs() inference_job = hub.submit_inference_job( model=target_model, device=device, inputs=input_data, ) on_device_output = inference_job.download_output_data() ``` With the output of the model, you can compute like PSNR, relative errors or spot check the output with expected output. **Note**: This on-device profiling and inference requires access to Qualcomm® AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup). ## Run demo on a cloud-hosted device You can also run the demo on-device. ```bash python -m qai_hub_models.models.deeplabv3_plus_mobilenet.demo --on-device ``` **NOTE**: If you want running in a Jupyter Notebook or Google Colab like environment, please add the following to your cell (instead of the above). ``` %run -m qai_hub_models.models.deeplabv3_plus_mobilenet.demo -- --on-device ``` ## Deploying compiled model to Android The models can be deployed using multiple runtimes: - TensorFlow Lite (`.tflite` export): [This tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a guide to deploy the .tflite model in an Android application. 
- QNN (`.so` export ): This [sample app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html) provides instructions on how to use the `.so` shared library in an Android application. ## View on Qualcomm® AI Hub Get more details on DeepLabV3-Plus-MobileNet's performance across various devices [here](https://aihub.qualcomm.com/models/deeplabv3_plus_mobilenet). Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/) ## License - The license for the original implementation of DeepLabV3-Plus-MobileNet can be found [here](https://github.com/jfzhang95/pytorch-deeplab-xception/blob/master/LICENSE). - The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url}) ## References * [Rethinking Atrous Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1706.05587) * [Source Model Implementation](https://github.com/jfzhang95/pytorch-deeplab-xception) ## Community * Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:[email protected]).
{"license": "mit", "library_name": "pytorch", "tags": ["android"], "datasets": ["VOC2012"], "pipeline_tag": "image-segmentation"}
qualcomm/DeepLabV3-Plus-MobileNet
null
[ "pytorch", "tflite", "android", "image-segmentation", "dataset:VOC2012", "arxiv:1706.05587", "license:mit", "region:us" ]
null
2024-04-30T21:49:49+00:00
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
kssumanth6/IntentClassification_V3
null
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T21:50:19+00:00
null
null
{}
alexandro767/my_t5_base_for_hw2_title_retune
null
[ "region:us" ]
null
2024-04-30T21:51:45+00:00
null
null
{"license": "openrail"}
Hakari323/Nah
null
[ "license:openrail", "region:us" ]
null
2024-04-30T21:52:43+00:00
null
null
# LocAlM Compact yet powerful, LocAlM efficiently identifies the appropriate medical specialists based on your specific needs and preferences. (Less than 25 tokens per prompt!!!) <img src="https://cdn-uploads.huggingface.co/production/uploads/6630105676ea93b5c2b0ac1f/gIHJOFIv6TgS7MQwpMQp4.jpeg" width=400 /> > Alm is a small but smart duck, reminiscent in size of his parent model, phi3 # Versions LocAlM comes in two versions. I recommend you use the latest version: *2-localm-phi3-q5ks.gguf* You can load the model in Ollama if you want to run it locally. The model is very small and takes only 2.5 GB. In order to use *2-localm-phi3-q5ks.gguf*, follow this Alpaca prompt format: ``` "instruction": For the symptoms given in input give me 1 or more doctors I should consult in french ordered by pertinence. "input": {{text}} ```
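## Local inference sketch (illustrative) The example below is not from the original card: it assumes you would rather call the GGUF file directly from Python with `llama-cpp-python` instead of Ollama, and it simply wraps the Alpaca-style prompt shown above. The symptom text is a made-up placeholder.

```python
# Hypothetical local run of the quantized GGUF with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="2-localm-phi3-q5ks.gguf", n_ctx=2048)

symptoms = "chest pain radiating to the left arm and shortness of breath"
prompt = (
    "### Instruction:\n"
    "For the symptoms given in input give me 1 or more doctors I should consult "
    "in french ordered by pertinence.\n"
    f"### Input:\n{symptoms}\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=32, temperature=0.2)  # short completion is enough here
print(out["choices"][0]["text"])
```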
{}
potion-verte/LocAlM
null
[ "gguf", "region:us" ]
null
2024-04-30T21:53:13+00:00
null
null
{}
snakesss/jani
null
[ "region:us" ]
null
2024-04-30T21:53:30+00:00
text-generation
transformers
# Model Card for Model ID Phi3-mini-128k and phi3-mini-alpaca merged
{"library_name": "transformers", "tags": []}
MadElf1337/phi-3-mini-alpaca-merged
null
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T21:54:53+00:00
null
null
{}
Konark-HC/autotrain-n91is-yrclo
null
[ "region:us" ]
null
2024-04-30T21:58:03+00:00
null
null
# RVC Voice Model ## Overview This repository contains the RVC Voice Model, a robust machine learning model for voice synthesis. The model is designed to accurately replicate a specific voice, providing high-quality audio output suitable for various applications. ## File Structure - **Index File**: `logs/dotcom/added_IVF8_Flat_nprobe_1_dotcom_v2.index` - This index file is crucial for the operation of the model, facilitating efficient data retrieval. - **Model File**: `logs/dotcom/D_2333333.pth` - Contains the model parameters necessary for voice synthesis. - **Weights File**: `weights/dotcom.pth` - Stores the trained weights of the model, essential for generating the target voice. ## Usage The RVC Voice Model is fully unrestricted for any type of use, provided that proper credit is given to the creator. You are free to integrate, modify, and distribute this model in both personal and commercial projects. ## Credit If you use this model, please credit as follows: - **Creator of RVC model**: manikineko.nl ## Disclaimer The individual (voice source) on whom this voice model is based has been involved in activities considered highly illegal. The creation of this model is for educational and research purposes only, and it should not be used to glorify or endorse any illegal activities. ## License This project is licensed under the terms of the MIT License. ## Contact If you have any questions or need further information, please feel free to reach out via [email protected]. Thank you for using or contributing to the RVC Voice Model project!
{"license": "mit"}
bloomsirenix/dotcom_rvc
null
[ "tensorboard", "license:mit", "region:us" ]
null
2024-04-30T21:59:25+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
jiuhai/llama-3-1725
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T21:59:35+00:00
null
null
{}
anushkat/DistilGPT2-Beatles-model_Final
null
[ "region:us" ]
null
2024-04-30T22:00:17+00:00
text-generation
transformers
# Uploaded model - **Developed by:** katharsis - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
katharsis/llama3-8b-oig-unsloth-merged
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T22:01:06+00:00
text-classification
transformers
# Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.4699208438396454 f1_macro: 0.8648394526320947 f1_micro: 0.8277777777777777 f1_weighted: 0.827145991318232 precision_macro: 0.8595340501792115 precision_micro: 0.8277777777777777 precision_weighted: 0.8456027479091995 recall_macro: 0.8846808510638299 recall_micro: 0.8277777777777777 recall_weighted: 0.8277777777777777 accuracy: 0.8277777777777777
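## Usage sketch (illustrative) The original card stops at the validation metrics; the snippet below is an assumed quick start that loads the classifier through the standard `transformers` pipeline and scores the widget example from the card. Label names depend on how the AutoTrain job mapped them.

```python
# Hypothetical usage: text classification with the transformers pipeline API.
from transformers import pipeline

clf = pipeline("text-classification", model="Zerithas/v11")
print(clf("I love AutoTrain"))  # e.g. [{'label': ..., 'score': ...}]
```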
{"tags": ["autotrain", "text-classification"], "datasets": ["v11/autotrain-data"], "widget": [{"text": "I love AutoTrain"}]}
Zerithas/v11
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "autotrain", "dataset:v11/autotrain-data", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T22:01:39+00:00
text-generation
llama.cpp
# CodeGemma Model Page : [CodeGemma](https://ai.google.dev/gemma/docs/codegemma) Resources and Technical Documentation : [Technical Report](https://goo.gle/codegemma) : [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) Terms of Use : [Terms](https://ai.google.dev/gemma/terms) Authors : Google > [!IMPORTANT] > > In llama.cpp, and other related tools such as Ollama and LM Studio, please make sure that you have these flags set correctly, especially **`repeat-penalty`**. Georgi Gerganov (llama.cpp's author) shared his experience in https://huggingface.co/google/gemma-7b-it/discussions/38#65d7b14adb51f7c160769fa1. ## Description CodeGemma is a collection of lightweight open code models built on top of Gemma. CodeGemma models are text-to-text and text-to-code decoder-only models and are available as a 7 billion pretrained variant that specializes in code completion and code generation tasks, a 7 billion parameter instruction-tuned variant for code chat and instruction following and a 2 billion parameter pretrained variant for fast code completion. | | [ **codegemma-2b** ](https://huggingface.co/google/codegemma-1.1-2b-GGUF) | [codegemma-7b](https://huggingface.co/google/codegemma-7b-GGUF) | [codegemma-7b-it](https://huggingface.co/google/codegemma-1.1-7b-it-GGUF) | |----------------------------------|:----------------------------------------------------------------:|:----------------------------------------------------------:|:----------------------------------------------------------------:| | Code Completion | ✅ | ✅ | | | Generation from natural language | | ✅ | ✅ | | Chat | | | ✅ | | Instruction Following | | | ✅ | For detailed model card, refer to https://huggingface.co/google/codegemma-1.1-2b. ## Sample Usage ```shell $ cat non_prime /// Write a rust function to identify non-prime numbers. /// /// Examples: /// >>> is_not_prime(2) /// False /// >>> is_not_prime(10) /// True pub fn is_not_prime(n: i32) -> bool { $ main -m codegemma-1.1-2b.gguf --temp 0 --top-k 0 -f non_prime --log-disable --repeat-penalty 1.0 /// Write a rust function to identify non-prime numbers. 
/// /// Examples: /// >>> is_not_prime(2) /// False /// >>> is_not_prime(10) /// True pub fn is_not_prime(n: i32) -> bool { for i in 2..n { if n % i == 0 { return true; } } false } <|file_separator|> ``` ## Coding Benchmarks Benchmark | [2B](https://huggingface.co/google/codegemma-2b-GGUF) | [2B (1.1)](https://huggingface.co/google/codegemma-1.1-2b-GGUF) | [7B](https://huggingface.co/google/codegemma-7b-GGUF) | [7B-IT](https://huggingface.co/google/codegemma-7b-it-GGUF) | [7B-IT (1.1)](https://huggingface.co/google/codegemma-1.1-7b-it-GGUF) ----------------------|------|----------|------|-------|------------ HumanEval | 31.1 | 37.8 | 44.5 | 56.1 | 60.4 MBPP | 43.6 | 49.2 | 56.2 | 54.2 | 55.6 HumanEval Single Line | 78.4 | 79.3 | 76.1 | 68.3 | 77.4 HumanEval Multi Line | 51.4 | 51.0 | 58.4 | 20.1 | 23.7 BC HE C++ | 24.2 | 19.9 | 32.9 | 42.2 | 46.6 BC HE C# | 10.6 | 26.1 | 22.4 | 26.7 | 54.7 BC HE Go | 20.5 | 18.0 | 21.7 | 28.6 | 34.2 BC HE Java | 29.2 | 29.8 | 41.0 | 48.4 | 50.3 BC HE JavaScript | 21.7 | 28.0 | 39.8 | 46.0 | 48.4 BC HE Kotlin | 28.0 | 32.3 | 39.8 | 51.6 | 47.8 BC HE Python | 21.7 | 36.6 | 42.2 | 48.4 | 54.0 BC HE Rust | 26.7 | 24.2 | 34.1 | 36.0 | 37.3 BC MBPP C++ | 47.1 | 38.9 | 53.8 | 56.7 | 63.5 BC MBPP C# | 28.7 | 45.3 | 32.5 | 41.2 | 62.0 BC MBPP Go | 45.6 | 38.9 | 43.3 | 46.2 | 53.2 BC MBPP Java | 41.8 | 49.7 | 50.3 | 57.3 | 62.9 BC MBPP JavaScript | 45.3 | 45.0 | 58.2 | 61.4 | 61.4 BC MBPP Kotlin | 46.8 | 49.7 | 54.7 | 59.9 | 62.6 BC MBPP Python | 38.6 | 52.9 | 59.1 | 62.0 | 60.2 BC MBPP Rust | 45.3 | 47.4 | 52.9 | 53.5 | 52.3 ## Natural Language Benchmarks ![CodeGemma Natural Language Benchmarks](./codegemma_nl_benchmarks.png)
{"license": "gemma", "library_name": "llama.cpp", "extra_gated_heading": "Access CodeGemma on Hugging Face", "extra_gated_prompt": "To access Gemma on Hugging Face, you\u2019re required to review and agree to Google\u2019s usage license. To do this, please ensure you\u2019re logged-in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license", "license_link": "https://ai.google.dev/gemma/terms", "pipeline_tag": "text-generation"}
google/codegemma-1.1-2b-GGUF
null
[ "llama.cpp", "gguf", "text-generation", "license:gemma", "region:us" ]
null
2024-04-30T22:01:59+00:00
text-generation
llama.cpp
# CodeGemma Model Page : [CodeGemma](https://ai.google.dev/gemma/docs/codegemma) Resources and Technical Documentation : [Technical Report](https://goo.gle/codegemma) : [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) Terms of Use : [Terms](https://ai.google.dev/gemma/terms) Authors : Google > [!IMPORTANT] > > In llama.cpp, and other related tools such as Ollama and LM Studio, please make sure that you have these flags set correctly, especially **`repeat-penalty`**. Georgi Gerganov (llama.cpp's author) shared his experience in https://huggingface.co/google/gemma-7b-it/discussions/38#65d7b14adb51f7c160769fa1. ## Description CodeGemma is a collection of lightweight open code models built on top of Gemma. CodeGemma models are text-to-text and text-to-code decoder-only models and are available as a 7 billion pretrained variant that specializes in code completion and code generation tasks, a 7 billion parameter instruction-tuned variant for code chat and instruction following and a 2 billion parameter pretrained variant for fast code completion. | | [ **codegemma-2b** ](https://huggingface.co/google/codegemma-1.1-2b-GGUF) | [codegemma-7b](https://huggingface.co/google/codegemma-7b-GGUF) | [codegemma-7b-it](https://huggingface.co/google/codegemma-1.1-7b-it-GGUF) | |----------------------------------|:----------------------------------------------------------------:|:----------------------------------------------------------:|:----------------------------------------------------------------:| | Code Completion | ✅ | ✅ | | | Generation from natural language | | ✅ | ✅ | | Chat | | | ✅ | | Instruction Following | | | ✅ | For detailed model card, refer to https://huggingface.co/google/codegemma-1.1-7b-it. ## Sample Usage ```shell $ cat non_prime /// Write a rust function to identify non-prime numbers. /// /// Examples: /// >>> is_not_prime(2) /// False /// >>> is_not_prime(10) /// True pub fn is_not_prime(n: i32) -> bool { $ main -m codegemma-1.1-7b-it.gguf --temp 0 --top-k 0 -f non_prime --log-disable --repeat-penalty 1.0 /// Write a rust function to identify non-prime numbers. 
/// /// Examples: /// >>> is_not_prime(2) /// False /// >>> is_not_prime(10) /// True pub fn is_not_prime(n: i32) -> bool { if n <= 1 { return true; } for i in 2..=(n as f64).sqrt() as i32 { if n % i == 0 { return true; } } false } ``` ## Coding Benchmarks Benchmark | [2B](https://huggingface.co/google/codegemma-2b-GGUF) | [2B (1.1)](https://huggingface.co/google/codegemma-1.1-2b-GGUF) | [7B](https://huggingface.co/google/codegemma-7b-GGUF) | [7B-IT](https://huggingface.co/google/codegemma-7b-it-GGUF) | [7B-IT (1.1)](https://huggingface.co/google/codegemma-1.1-7b-it-GGUF) ----------------------|------|----------|------|-------|------------ HumanEval | 31.1 | 37.8 | 44.5 | 56.1 | 60.4 MBPP | 43.6 | 49.2 | 56.2 | 54.2 | 55.6 HumanEval Single Line | 78.4 | 79.3 | 76.1 | 68.3 | 77.4 HumanEval Multi Line | 51.4 | 51.0 | 58.4 | 20.1 | 23.7 BC HE C++ | 24.2 | 19.9 | 32.9 | 42.2 | 46.6 BC HE C# | 10.6 | 26.1 | 22.4 | 26.7 | 54.7 BC HE Go | 20.5 | 18.0 | 21.7 | 28.6 | 34.2 BC HE Java | 29.2 | 29.8 | 41.0 | 48.4 | 50.3 BC HE JavaScript | 21.7 | 28.0 | 39.8 | 46.0 | 48.4 BC HE Kotlin | 28.0 | 32.3 | 39.8 | 51.6 | 47.8 BC HE Python | 21.7 | 36.6 | 42.2 | 48.4 | 54.0 BC HE Rust | 26.7 | 24.2 | 34.1 | 36.0 | 37.3 BC MBPP C++ | 47.1 | 38.9 | 53.8 | 56.7 | 63.5 BC MBPP C# | 28.7 | 45.3 | 32.5 | 41.2 | 62.0 BC MBPP Go | 45.6 | 38.9 | 43.3 | 46.2 | 53.2 BC MBPP Java | 41.8 | 49.7 | 50.3 | 57.3 | 62.9 BC MBPP JavaScript | 45.3 | 45.0 | 58.2 | 61.4 | 61.4 BC MBPP Kotlin | 46.8 | 49.7 | 54.7 | 59.9 | 62.6 BC MBPP Python | 38.6 | 52.9 | 59.1 | 62.0 | 60.2 BC MBPP Rust | 45.3 | 47.4 | 52.9 | 53.5 | 52.3 ## Natural Language Benchmarks ![CodeGemma Natural Language Benchmarks](./codegemma_nl_benchmarks.png)
{"license": "gemma", "library_name": "llama.cpp", "extra_gated_heading": "Access CodeGemma on Hugging Face", "extra_gated_prompt": "To access Gemma on Hugging Face, you\u2019re required to review and agree to Google\u2019s usage license. To do this, please ensure you\u2019re logged-in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license", "license_link": "https://ai.google.dev/gemma/terms", "pipeline_tag": "text-generation"}
google/codegemma-1.1-7b-it-GGUF
null
[ "llama.cpp", "gguf", "text-generation", "license:gemma", "region:us" ]
null
2024-04-30T22:03:05+00:00
null
null
{}
strannik/LlamaForCausalLM
null
[ "region:us" ]
null
2024-04-30T22:03:21+00:00
null
null
{}
kuma-rtin/systems
null
[ "region:us" ]
null
2024-04-30T22:03:32+00:00
null
null
{}
Kennyqwp/markus2
null
[ "region:us" ]
null
2024-04-30T22:04:47+00:00
null
null
# cleatherbury/CatPPT-base-Q5_K_M-GGUF This model was converted to GGUF format from [`rishiraj/CatPPT-base`](https://huggingface.co/rishiraj/CatPPT-base) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/rishiraj/CatPPT-base) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo cleatherbury/CatPPT-base-Q5_K_M-GGUF --model catppt-base.Q5_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo cleatherbury/CatPPT-base-Q5_K_M-GGUF --model catppt-base.Q5_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m catppt-base.Q5_K_M.gguf -n 128 ```
{"license": "apache-2.0", "tags": ["merge", "llama-cpp", "gguf-my-repo"]}
cleatherbury/CatPPT-base-Q5_K_M-GGUF
null
[ "gguf", "merge", "llama-cpp", "gguf-my-repo", "license:apache-2.0", "region:us" ]
null
2024-04-30T22:05:19+00:00
reinforcement-learning
ml-agents
# **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: dhajnes/ppo-SnowballTarget 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
{"library_name": "ml-agents", "tags": ["SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget"]}
dhajnes/ppo-SnowballTarget
null
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
null
2024-04-30T22:05:51+00:00