Dataset schema (per-column type and observed range):

| column | type | observed range |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-28 00:40:13 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 500 distinct values |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-28 00:36:54 |
| card | string | length 11 to 1.01M |
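The rows below follow this schema. To work with the dump programmatically, the columns map directly onto the 🤗 `datasets` API; a minimal sketch, where the dataset id is a placeholder since this dump's repo is not named here:

```python
from datasets import load_dataset

# "librarian-bots/model-cards" is a hypothetical id -- substitute the repo this dump came from.
ds = load_dataset("librarian-bots/model-cards", split="train", streaming=True)

# Filter the stream to popular text-generation models.
popular = (row for row in ds
           if row["pipeline_tag"] == "text-generation" and row["downloads"] > 10_000)
print(next(popular)["modelId"])
```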
tensorplex-labs/Sumo-T9-7B-v0.1
tensorplex-labs
2024-05-22T13:43:06Z
6
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pretrained", "7B", "English", "base-model", "bittensor", "decentralized AI", "conversational", "en", "dataset:tiiuae/falcon-refinedweb", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T10:30:52Z
--- language: - en license: mit library_name: transformers tags: - pretrained - 7B - English - text-generation - base-model - bittensor - decentralized AI datasets: - tiiuae/falcon-refinedweb --- # Sumo-T9-7B-v0.1 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65a8a4c5539e211436ef5485/GUbZGoQs9FKUjXzfHifZ6.png) ### Tensorplex Labs Unveils Sumo-T9-7B: Beating Notable 7B Pretrained Models [Tensorplex Labs](https://tensorplex.ai) is proud to announce that its latest top-performing model on Bittensor Subnet 9, Sumo-T9-7B, has outperformed notable models such as TII Falcon 7B and Meta's Llama-2-7b-hf. This achievement highlights the potential of decentralized networks like Bittensor and underscores Tensorplex Labs' commitment to advancing open-source AI technologies. "Sumo" represents the family of models developed by Tensorplex, and "T9" designates the top-performing model specifically trained for Bittensor Subnet 9. Bittensor Subnet 9 serves a unique role within the Bittensor ecosystem by rewarding miners who produce pretrained foundational models on the Falcon Refined Web dataset. This subnet functions as a continuous benchmark, where miners are incentivized to achieve the best performance metrics with a model under the parameter limit. The competitive nature of Subnet 9 drives rapid advancements and refinements in large language model training. Since the parameter limit was raised to 7 billion on April 19, 2024, Tensorplex Labs has published the top-performing model, surpassing notable models such as Falcon 7B and Llama 2 7B in under a month. ## Model Details ### Model Description - **Developed by:** [Tensorplex Labs](https://tensorplex.ai) - **Model type:** Pretrained Foundational Language Model - **Language(s) (NLP):** Primarily English - **License:** MIT - **Architecture**: Llama-style architecture with 6.9 billion parameters - **Training Data**: Trained on the tiiuae/falcon-refinedweb dataset - **Training Objective**: Causal language modeling (next-token prediction) - **Original Model Repo**: [tensorplex-labs/pretraining-sn9-7B-1](https://huggingface.co/tensorplex-labs/pretraining-sn9-7B-1) Sumo-T9-7B-v0.1 features a large vocabulary (100k tokens) compatible with the GPT-4 tokenizer, ensuring versatility across a range of natural language processing tasks. ⛔ **This is a pretrained base model that has not been aligned. Use with caution, or fine-tune it on downstream tasks before deployment.** ### Model Sources - **Bittensor Subnet9 Leaderboard:** [https://huggingface.co/spaces/RaoFoundation/pretraining-leaderboard](https://huggingface.co/spaces/RaoFoundation/pretraining-leaderboard) - **Bittensor Subnet9 Repository:** [https://github.com/RaoFoundation/pretraining/tree/main](https://github.com/RaoFoundation/pretraining/tree/main) ## How to Get Started with the Model Use the code below to get started with the model.
```python from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model = "tensorplex-labs/Sumo-T9-7B-v0.1" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, ) sequences = pipeline( "What is Yokozuna?", max_length=256, do_sample=True, temperature=0.6, top_p=0.9, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, bos_token_id=tokenizer.bos_token_id, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` ## Training Details ### Training Data This model has been trained on the [tiiuae/falcon-refinedweb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) dataset, and training is still ongoing. ## Evaluation Sumo-T9-7B-v0.1 has outperformed notable models such as TII Falcon 7B, Meta's Llama-2-7b, and Llama-1-7b in zero-shot performance, establishing itself as the leading model among these 7B-class baselines in aggregate. The benchmarks include ARC Challenge, GSM8K, HellaSwag, MMLU, TruthfulQA, and Winogrande. | | avg | arc_challenge | gsm8k | hellaswag | mmlu | truthfulqa_mc2 | winogrande | |:--------------------------------------|-----------:|----------------:|--------:|------------:|-------:|-----------------:|-------------:| | meta-llama/Meta-Llama-3-8B | 0.6009 | 0.5333 | 0.4913 | 0.7906 | 0.621 | 0.4392 | 0.7301 | | **tensorplex-labs/Sumo-T9-7B-v0.1** | **0.4769** | 0.4753 | 0.1031 | 0.7666 | 0.4426 | 0.3723 | 0.7017 | | meta-llama/Llama-2-7b-hf | 0.473 | 0.4625 | 0.1213 | 0.7597 | 0.4123 | 0.3896 | 0.693 | | huggyllama/llama-7b | 0.4386 | 0.4471 | 0.0849 | 0.7621 | 0.2973 | 0.3408 | 0.6993 | | tiiuae/falcon-7b | 0.4189 | 0.4343 | 0.0432 | 0.7636 | 0.2582 | 0.3428 | 0.6717 | ## Future Plans Tensorplex Labs will continue pushing the limits of what is possible on Subnet 9, and will also work on fine-tuning state-of-the-art models for Web3 domain-specific use cases. One of the most ambitious projects is the development of a new data collection subnet. This will enable open and incentivized contributions of intelligence from a diverse pool of participants. The subnet will function as a collaborative platform where individuals can provide human preference or training data, which will be used to train, fine-tune, and evaluate AI models and miners across various subnets on Bittensor. ## About Tensorplex Labs Tensorplex Labs is an AI and Web3 startup building the decentralized AI of the future. The company’s mission is to decentralize AI, democratize access to data and intelligence, and build a more open, transparent, and equitable future for AI. Tensorplex Labs develops open-source capital and intelligence infrastructure and applications designed to grow decentralized AI, Web3, and crypto ecosystems by making them more capital efficient, intelligent, and trustworthy. The company is currently developing a novel way to better incentivize human input for training AI models, opening access to new pools of human contributors and new income opportunities. Founded in 2023 and headquartered in Singapore, Tensorplex Labs counts Canonical Crypto, Collab+Currency, and Digital Currency Group among its investors. For more information, visit [Tensorplex](https://tensorplex.ai). ## Model Card Authors - [email protected] ## Model Card Contact - [email protected]
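The zero-shot table above can be reproduced with a harness such as EleutherAI's lm-evaluation-harness; a minimal sketch, assuming lm-eval v0.4+ and its `simple_evaluate` entry point:

```python
import lm_eval

# Zero-shot run over the same task mix as the card's evaluation table.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=tensorplex-labs/Sumo-T9-7B-v0.1,dtype=bfloat16",
    tasks=["arc_challenge", "gsm8k", "hellaswag", "mmlu", "truthfulqa_mc2", "winogrande"],
    num_fewshot=0,
)
print(results["results"])
```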
brendanduke/Llama-2-7B-q4_0-pure.gguf
brendanduke
2024-05-22T13:43:06Z
36
0
null
[ "gguf", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-22T13:37:40Z
--- license: apache-2.0 ---
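The card for this GGUF row is license-only. A minimal loading sketch with llama-cpp-python, assuming the repo holds a single q4_0 `.gguf` file:

```python
from llama_cpp import Llama

# Fetch the GGUF file from the Hub and run a short completion on CPU.
llm = Llama.from_pretrained(
    repo_id="brendanduke/Llama-2-7B-q4_0-pure.gguf",
    filename="*.gguf",  # assumption: exactly one GGUF file in the repo
    n_ctx=2048,
)
out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```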
hgnoi/9QeCVLNmTBXdF6id
hgnoi
2024-05-22T13:42:56Z
127
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T13:41:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
colorfulniakoil/aaa
colorfulniakoil
2024-05-22T13:41:53Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-22T13:41:53Z
--- license: apache-2.0 ---
Niggendar/waiANINSFWPONYXL_v40
Niggendar
2024-05-22T13:41:44Z
138
3
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-05-22T13:34:18Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hgnoi/nucy4sYeLu78Uuy5
hgnoi
2024-05-22T13:41:15Z
125
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T13:39:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vuongnhathien/test-seed-new
vuongnhathien
2024-05-22T13:40:33Z
195
0
transformers
[ "transformers", "tensorboard", "safetensors", "convnextv2", "image-classification", "generated_from_trainer", "base_model:facebook/convnextv2-nano-22k-384", "base_model:finetune:facebook/convnextv2-nano-22k-384", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-22T13:35:35Z
--- license: apache-2.0 base_model: facebook/convnextv2-nano-22k-384 tags: - image-classification - generated_from_trainer metrics: - accuracy model-index: - name: test-seed-new results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-seed-new This model is a fine-tuned version of [facebook/convnextv2-nano-22k-384](https://huggingface.co/facebook/convnextv2-nano-22k-384) on the jbarat/plant_species dataset. It achieves the following results on the evaluation set: - Loss: 0.4140 - Accuracy: 0.875 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 10 | 0.5266 | 0.8 | | No log | 2.0 | 20 | 0.3692 | 0.85 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
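Inference for a fine-tuned classifier like `test-seed-new` reduces to the transformers image-classification pipeline; a minimal sketch, with the image path as a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned ConvNeXt V2 checkpoint and classify a local image.
classifier = pipeline("image-classification", model="vuongnhathien/test-seed-new")
for pred in classifier("plant.jpg"):  # placeholder path to a local image
    print(f"{pred['label']}: {pred['score']:.3f}")
```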
hgnoi/eIxQ0ZDY7VrTK5yS
hgnoi
2024-05-22T13:40:28Z
126
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T13:38:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
moiz1/Mistral-7b-Instruct-v0.2-finetune-summerization-10k-system-prompt-style
moiz1
2024-05-22T13:38:03Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T12:19:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
matthieuzone/MOTHAISter
matthieuzone
2024-05-22T13:35:45Z
2
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-22T06:12:30Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks cheese widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - matthieuzone/MOTHAISter <Gallery /> ## Model description These are matthieuzone/MOTHAISter LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of sks cheese` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](matthieuzone/MOTHAISter/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
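The card's "How to use" block is still a TODO; a minimal sketch of running this LoRA on the stated SDXL base with diffusers (fp16 and a CUDA device assumed):

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL base model, then attach the DreamBooth LoRA weights.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("matthieuzone/MOTHAISter")

# The card names "a photo of sks cheese" as the trigger phrase.
image = pipe("a photo of sks cheese on a wooden board").images[0]
image.save("sks_cheese.png")
```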
IR-Cocktail/bert-large-uncased-mean-v3-msmarco
IR-Cocktail
2024-05-22T13:35:35Z
2
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-22T07:58:15Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # IR-Cocktail/bert-large-uncased-mean-v3-msmarco This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('IR-Cocktail/bert-large-uncased-mean-v3-msmarco') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch # Mean pooling - take the attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] # First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('IR-Cocktail/bert-large-uncased-mean-v3-msmarco') model = AutoModel.from_pretrained('IR-Cocktail/bert-large-uncased-mean-v3-msmarco') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=IR-Cocktail/bert-large-uncased-mean-v3-msmarco) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 16633 with parameters: ``` {'batch_size': 30, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 10000, "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "correct_bias": false, "eps": 1e-06, "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
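The card pitches these embeddings for semantic search; a minimal retrieval sketch using sentence-transformers' cosine-similarity utility:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("IR-Cocktail/bert-large-uncased-mean-v3-msmarco")

query_emb = model.encode("what causes rain", convert_to_tensor=True)
passage_embs = model.encode(
    ["Rain forms when water vapour condenses into droplets heavy enough to fall.",
     "MS MARCO is a large-scale passage ranking dataset."],
    convert_to_tensor=True,
)
print(util.cos_sim(query_emb, passage_embs))  # higher score = more relevant passage
```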
vuongnhathien/convnext-nano-15ep
vuongnhathien
2024-05-22T13:33:16Z
199
1
transformers
[ "transformers", "tensorboard", "safetensors", "convnextv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/convnextv2-nano-22k-384", "base_model:finetune:facebook/convnextv2-nano-22k-384", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-22T06:00:25Z
--- license: apache-2.0 base_model: facebook/convnextv2-nano-22k-384 tags: - image-classification - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: convnext-nano-15ep results: - task: name: Image Classification type: image-classification dataset: name: vuongnhathien/30VNFoods type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.9081349206349206 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-nano-15ep This model is a fine-tuned version of [facebook/convnextv2-nano-22k-384](https://huggingface.co/facebook/convnextv2-nano-22k-384) on the vuongnhathien/30VNFoods dataset. It achieves the following results on the evaluation set: - Loss: 0.4761 - Accuracy: 0.9081 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5814 | 1.0 | 275 | 0.5148 | 0.8477 | | 0.2972 | 2.0 | 550 | 0.4967 | 0.8557 | | 0.1871 | 3.0 | 825 | 0.4887 | 0.8716 | | 0.1205 | 4.0 | 1100 | 0.5173 | 0.8688 | | 0.0732 | 5.0 | 1375 | 0.4979 | 0.8815 | | 0.0443 | 6.0 | 1650 | 0.5483 | 0.8815 | | 0.0392 | 7.0 | 1925 | 0.5512 | 0.8835 | | 0.018 | 8.0 | 2200 | 0.5102 | 0.8946 | | 0.0043 | 9.0 | 2475 | 0.5423 | 0.8954 | | 0.0087 | 10.0 | 2750 | 0.4903 | 0.9105 | | 0.0035 | 11.0 | 3025 | 0.4855 | 0.9082 | | 0.0022 | 12.0 | 3300 | 0.4874 | 0.9074 | | 0.0019 | 13.0 | 3575 | 0.4858 | 0.9082 | | 0.0018 | 14.0 | 3850 | 0.4857 | 0.9082 | | 0.0018 | 15.0 | 4125 | 0.4859 | 0.9082 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
ninagroot/Baby-Llama-58M-RUN3_5
ninagroot
2024-05-22T13:33:06Z
139
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-29T09:29:47Z
--- tags: - generated_from_trainer model-index: - name: Baby-Llama-58M-RUN3_5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Baby-Llama-58M-RUN3_5 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 5.2656 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00025 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 287.9659 | 1.0 | 12 | 256.0041 | | 230.7873 | 2.0 | 24 | 212.6014 | | 207.1002 | 3.0 | 36 | 180.9384 | | 121.5561 | 4.0 | 48 | 107.3193 | | 81.2108 | 5.0 | 60 | 71.6529 | | 45.9781 | 6.0 | 72 | 40.4501 | | 24.5986 | 7.0 | 84 | 22.4212 | | 15.2205 | 8.0 | 96 | 13.7469 | | 10.1247 | 9.0 | 108 | 9.8119 | | 7.975 | 10.0 | 120 | 7.8583 | | 6.7087 | 11.0 | 132 | 7.0360 | | 6.1988 | 12.0 | 144 | 6.4104 | | 5.6752 | 13.0 | 156 | 6.1222 | | 5.5155 | 14.0 | 168 | 5.8179 | | 4.7754 | 15.0 | 180 | 5.5676 | | 4.816 | 16.0 | 192 | 5.4583 | | 4.817 | 17.0 | 204 | 5.3641 | | 4.6966 | 18.0 | 216 | 5.3147 | | 4.8322 | 19.0 | 228 | 5.2867 | | 4.4875 | 20.0 | 240 | 5.2656 | ### Framework versions - Transformers 4.39.1 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
matonier/bloomz-560-m-peft-method
matonier
2024-05-22T13:30:39Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-22T13:30:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
matthieuzone/PECORINOter
matthieuzone
2024-05-22T13:30:23Z
1
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-22T06:14:31Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks cheese widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - matthieuzone/PECORINOter <Gallery /> ## Model description These are matthieuzone/PECORINOter LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of sks cheese` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](matthieuzone/PECORINOter/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
matthieuzone/NEUFCHATELter
matthieuzone
2024-05-22T13:28:39Z
1
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-22T06:13:32Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks cheese widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - matthieuzone/NEUFCHATELter <Gallery /> ## Model description These are matthieuzone/NEUFCHATELter LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of sks cheese` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](matthieuzone/NEUFCHATELter/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
svdr/svdr-nq
svdr
2024-05-22T13:26:49Z
34
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-22T13:25:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
paulh27/iwslt_aligned_smallT5_cont0
paulh27
2024-05-22T13:21:30Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "de", "en", "dataset:paulh27/alignment_iwslt2017_de_en", "base_model:google/mt5-small", "base_model:finetune:google/mt5-small", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-06T04:21:13Z
--- language: - de - en license: apache-2.0 base_model: google/mt5-small tags: - generated_from_trainer datasets: - paulh27/alignment_iwslt2017_de_en metrics: - bleu model-index: - name: iwslt_aligned_smallT5_cont0 results: - task: name: Translation type: translation dataset: name: paulh27/alignment_iwslt2017_de_en type: paulh27/alignment_iwslt2017_de_en metrics: - name: Bleu type: bleu value: 65.6358 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # iwslt_aligned_smallT5_cont0 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the paulh27/alignment_iwslt2017_de_en dataset. It achieves the following results on the evaluation set: - Loss: 0.5612 - Bleu: 65.6358 - Gen Len: 28.7691 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adafactor - lr_scheduler_type: constant - training_steps: 500000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:| | 1.2426 | 0.78 | 10000 | 0.8300 | 46.2793 | 28.6532 | | 0.9931 | 1.55 | 20000 | 0.6756 | 52.2709 | 28.6441 | | 0.8573 | 2.33 | 30000 | 0.6143 | 55.8294 | 28.5405 | | 0.762 | 3.11 | 40000 | 0.5811 | 57.5135 | 28.366 | | 0.734 | 3.88 | 50000 | 0.5499 | 58.6125 | 28.5101 | | 0.6722 | 4.66 | 60000 | 0.5228 | 59.6427 | 28.8356 | | 0.6215 | 5.43 | 70000 | 0.5161 | 60.4701 | 28.7534 | | 0.5756 | 6.21 | 80000 | 0.5068 | 62.0864 | 28.6498 | | 0.5738 | 6.99 | 90000 | 0.5005 | 61.9714 | 28.5788 | | 0.5384 | 7.76 | 100000 | 0.4909 | 62.407 | 28.5282 | | 0.5109 | 8.54 | 110000 | 0.4902 | 62.1452 | 28.4617 | | 0.4816 | 9.32 | 120000 | 0.4875 | 62.6499 | 28.5518 | | 0.4493 | 10.09 | 130000 | 0.4867 | 62.6694 | 28.6993 | | 0.4648 | 10.87 | 140000 | 0.4775 | 63.3179 | 28.5495 | | 0.4414 | 11.64 | 150000 | 0.4787 | 63.6928 | 28.4673 | | 0.4158 | 12.42 | 160000 | 0.4792 | 63.8752 | 28.5011 | | 0.3895 | 13.2 | 170000 | 0.4794 | 63.8429 | 28.6498 | | 0.4031 | 13.97 | 180000 | 0.4757 | 63.9496 | 28.7264 | | 0.3844 | 14.75 | 190000 | 0.4855 | 63.7498 | 28.8288 | | 0.3637 | 15.53 | 200000 | 0.4800 | 64.2277 | 28.661 | | 0.3473 | 16.3 | 210000 | 0.4854 | 64.4683 | 28.786 | | 0.3243 | 17.08 | 220000 | 0.4903 | 64.7805 | 28.6791 | | 0.3426 | 17.85 | 230000 | 0.4819 | 64.679 | 28.4809 | | 0.3295 | 18.63 | 240000 | 0.4852 | 65.3735 | 28.6014 | | 0.3124 | 19.41 | 250000 | 0.4947 | 64.5641 | 28.6745 | | 0.2933 | 20.18 | 260000 | 0.4972 | 65.1364 | 28.6419 | | 0.3101 | 20.96 | 270000 | 0.4902 | 64.6747 | 28.6802 | | 0.2991 | 21.74 | 280000 | 0.4907 | 64.9732 | 28.5653 | | 0.2828 | 22.51 | 290000 | 0.5038 | 64.7552 | 28.6261 | | 0.2688 | 23.29 | 300000 | 0.5042 | 65.0702 | 28.7534 | | 0.2555 | 24.06 | 310000 | 0.5101 | 65.0378 | 29.089 | | 0.2692 | 24.84 | 320000 | 0.5022 | 64.9991 | 28.6937 | | 0.2593 | 25.62 | 330000 | 0.5085 | 65.2478 | 28.6137 | | 0.2439 | 26.39 | 340000 | 0.5152 | 64.863 | 28.6464 | | 0.2327 | 27.17 | 350000 | 0.5165 | 65.0748 | 28.7286 | | 0.249 | 27.95 | 360000 | 0.5116 | 64.7249 | 28.6137 | | 0.238 | 
28.72 | 370000 | 0.5202 | 64.7651 | 28.5968 | | 0.2297 | 29.5 | 380000 | 0.5243 | 65.3334 | 28.7005 | | 0.2152 | 30.27 | 390000 | 0.5336 | 64.9364 | 28.6081 | | 0.2106 | 31.05 | 400000 | 0.5408 | 65.117 | 28.6745 | | 0.2234 | 31.83 | 410000 | 0.5249 | 64.8926 | 28.6318 | | 0.2085 | 32.6 | 420000 | 0.5306 | 65.5715 | 28.7984 | | 0.2018 | 33.38 | 430000 | 0.5429 | 64.9154 | 28.6351 | | 0.1885 | 34.16 | 440000 | 0.5453 | 65.0538 | 28.8525 | | 0.2049 | 34.93 | 450000 | 0.5434 | 65.2857 | 28.7207 | | 0.1957 | 35.71 | 460000 | 0.5491 | 65.3436 | 28.714 | | 0.1867 | 36.49 | 470000 | 0.5536 | 65.4934 | 28.7939 | | 0.1765 | 37.26 | 480000 | 0.5583 | 65.5595 | 28.8255 | | 0.1786 | 38.04 | 490000 | 0.5612 | 65.6358 | 28.7691 | | 0.1809 | 38.81 | 500000 | 0.5573 | 65.0266 | 28.7455 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
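## Example usage

The card omits an inference snippet; below is a minimal, hedged sketch of German-to-English translation with this checkpoint. The translation direction and the absence of a task prefix are assumptions based on the dataset name, not documented facts.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hedged sketch: assumes standard mT5 seq2seq usage with no task prefix;
# the exact input format used during fine-tuning is not documented here.
repo = "paulh27/iwslt_aligned_smallT5_cont0"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("Guten Morgen, wie geht es Ihnen?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```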
harveybro/molt5-augmented-default-300-small-caption2smiles
harveybro
2024-05-22T13:19:25Z
110
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-21T08:02:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ayoubkirouane/llava-phi3-instruct-Lora
ayoubkirouane
2024-05-22T13:16:23Z
3
0
peft
[ "peft", "safetensors", "image-text-to-text", "conversational", "en", "dataset:ayoubkirouane/llava-instruct-small", "region:us" ]
image-text-to-text
2024-05-22T12:59:44Z
--- datasets: - ayoubkirouane/llava-instruct-small library_name: peft pipeline_tag: image-text-to-text language: - en --- ## Base model: - xtuner/llava-phi-3-mini-hf ## Dataset: - ayoubkirouane/llava-instruct-small ## Get started: ```python from transformers import AutoModelForCausalLM from peft import PeftModel base_model = AutoModelForCausalLM.from_pretrained("xtuner/llava-phi-3-mini-hf") peft_model_id = "ayoubkirouane/llava-phi3-instruct-Lora" model = PeftModel.from_pretrained(base_model, peft_model_id) ```
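For deployment, the adapter can optionally be folded into the base weights using PEFT's standard merge API; a short hedged sketch (the output path is a placeholder):

```python
# Optional: merge the LoRA adapter into the base model for standalone use.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("llava-phi3-instruct-merged")  # placeholder path
```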
namratanwani/summarise-userquery-healthcare
namratanwani
2024-05-22T13:14:09Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "llama 3", "8B", "en", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-22T12:05:56Z
--- language: - en library_name: transformers tags: - unsloth - llama 3 - 8B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ZaneHorrible/n_rmsProp_VitB-p32-384-2e-4-batch_16_epoch_4_classes_24
ZaneHorrible
2024-05-22T13:12:03Z
217
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch32-384", "base_model:finetune:google/vit-base-patch32-384", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-22T11:46:25Z
--- license: apache-2.0 base_model: google/vit-base-patch32-384 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: n_rmsProp_VitB-p32-384-2e-4-batch_16_epoch_4_classes_24 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9597701149425287 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # n_rmsProp_VitB-p32-384-2e-4-batch_16_epoch_4_classes_24 This model is a fine-tuned version of [google/vit-base-patch32-384](https://huggingface.co/google/vit-base-patch32-384) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1776 - Accuracy: 0.9598 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.8953 | 0.07 | 100 | 3.3433 | 0.1164 | | 1.8876 | 0.14 | 200 | 2.0956 | 0.3333 | | 0.7962 | 0.21 | 300 | 0.9204 | 0.7040 | | 0.5319 | 0.28 | 400 | 0.5776 | 0.8118 | | 0.3414 | 0.35 | 500 | 0.3952 | 0.8764 | | 0.1779 | 0.42 | 600 | 0.2754 | 0.9109 | | 0.2608 | 0.49 | 700 | 0.4758 | 0.8649 | | 0.2218 | 0.56 | 800 | 0.2755 | 0.9152 | | 0.1441 | 0.63 | 900 | 0.2786 | 0.9138 | | 0.1809 | 0.7 | 1000 | 0.3369 | 0.8894 | | 0.1212 | 0.77 | 1100 | 0.2293 | 0.9224 | | 0.1966 | 0.84 | 1200 | 0.1879 | 0.9468 | | 0.1587 | 0.91 | 1300 | 0.2081 | 0.9468 | | 0.123 | 0.97 | 1400 | 0.2061 | 0.9368 | | 0.1052 | 1.04 | 1500 | 0.2915 | 0.9181 | | 0.0701 | 1.11 | 1600 | 0.3753 | 0.9109 | | 0.0601 | 1.18 | 1700 | 0.2034 | 0.9382 | | 0.0911 | 1.25 | 1800 | 0.1898 | 0.9382 | | 0.022 | 1.32 | 1900 | 0.2885 | 0.9224 | | 0.0805 | 1.39 | 2000 | 0.2636 | 0.9310 | | 0.0024 | 1.46 | 2100 | 0.2271 | 0.9368 | | 0.0056 | 1.53 | 2200 | 0.1677 | 0.9555 | | 0.0789 | 1.6 | 2300 | 0.2369 | 0.9325 | | 0.0935 | 1.67 | 2400 | 0.2417 | 0.9353 | | 0.0499 | 1.74 | 2500 | 0.1791 | 0.9540 | | 0.0375 | 1.81 | 2600 | 0.2283 | 0.9411 | | 0.0166 | 1.88 | 2700 | 0.2564 | 0.9468 | | 0.0166 | 1.95 | 2800 | 0.2737 | 0.9267 | | 0.0033 | 2.02 | 2900 | 0.2508 | 0.9425 | | 0.0144 | 2.09 | 3000 | 0.1975 | 0.9483 | | 0.1054 | 2.16 | 3100 | 0.2073 | 0.9425 | | 0.0004 | 2.23 | 3200 | 0.1479 | 0.9598 | | 0.0288 | 2.3 | 3300 | 0.2287 | 0.9526 | | 0.0066 | 2.37 | 3400 | 0.2602 | 0.9411 | | 0.001 | 2.44 | 3500 | 0.2220 | 0.9468 | | 0.0233 | 2.51 | 3600 | 0.2505 | 0.9382 | | 0.0205 | 2.58 | 3700 | 0.1830 | 0.9583 | | 0.0083 | 2.65 | 3800 | 0.2539 | 0.9368 | | 0.0003 | 2.72 | 3900 | 0.2439 | 0.9440 | | 0.0003 | 2.79 | 4000 | 0.2040 | 0.9555 | | 0.019 | 2.86 | 4100 | 0.2246 | 0.9598 | | 0.0069 | 2.92 | 4200 | 0.2520 | 0.9526 | | 0.0003 | 2.99 | 4300 | 0.1937 | 0.9555 | | 0.0001 | 3.06 | 4400 | 0.2040 | 0.9511 | | 0.0004 | 3.13 | 4500 | 0.1777 | 0.9598 | | 0.0005 | 3.2 | 4600 | 0.1956 | 0.9626 | | 0.0001 | 3.27 | 4700 | 0.2120 | 
0.9569 | | 0.0001 | 3.34 | 4800 | 0.1936 | 0.9612 | | 0.0001 | 3.41 | 4900 | 0.2002 | 0.9583 | | 0.0002 | 3.48 | 5000 | 0.1795 | 0.9598 | | 0.0001 | 3.55 | 5100 | 0.1548 | 0.9655 | | 0.0006 | 3.62 | 5200 | 0.1931 | 0.9555 | | 0.0001 | 3.69 | 5300 | 0.1846 | 0.9598 | | 0.0 | 3.76 | 5400 | 0.2092 | 0.9526 | | 0.0 | 3.83 | 5500 | 0.1927 | 0.9555 | | 0.0 | 3.9 | 5600 | 0.1796 | 0.9555 | | 0.0 | 3.97 | 5700 | 0.1776 | 0.9598 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
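## Example usage

No inference example is given; below is a minimal sketch with the standard image-classification pipeline. The class names come from the unpublished imagefolder dataset, so they are not reproduced here.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ZaneHorrible/n_rmsProp_VitB-p32-384-2e-4-batch_16_epoch_4_classes_24",
)
# Accepts a local path, URL, or PIL image; "example.jpg" is a placeholder.
print(classifier("example.jpg", top_k=3))
```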
giantdev/dippy-soDBy-sn11m4
giantdev
2024-05-22T13:08:20Z
126
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T13:06:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Likich/falcon-finetune-qualcoding_500_prompt1
Likich
2024-05-22T13:06:58Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-22T13:06:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
HaikuEU/mixtral-fine-tuned-vanilla
HaikuEU
2024-05-22T13:02:54Z
4
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-22T11:59:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Cilix0/ppo-LunarLander-v2
Cilix0
2024-05-22T13:02:46Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-22T13:02:25Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 265.11 +/- 13.06 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
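Until the TODO above is filled in, here is a hedged sketch of the usual loading and evaluation pattern for SB3 checkpoints on the Hub. The checkpoint filename inside the repo is an assumption; check the Files & versions tab for the actual name.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; verify it against the repo contents.
checkpoint = load_from_hub(
    repo_id="Cilix0/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```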
vessl/llama-3-8b-bnb-4bit-dpo-qlora
vessl
2024-05-22T12:59:44Z
80
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "dpo", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-22T12:55:41Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - dpo base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** vessl - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
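A hedged inference sketch using Unsloth's loading API; the sequence length and prompt are illustrative assumptions, not the training configuration.

```python
from unsloth import FastLanguageModel

# Illustrative values; not necessarily the settings used during training.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="vessl/llama-3-8b-bnb-4bit-dpo-qlora",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```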
ludekcizinsky/phi3-dpo-align
ludekcizinsky
2024-05-22T12:58:38Z
76
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "base_model:quantized:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-22T12:56:40Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit --- # Uploaded model - **Developed by:** ludekcizinsky - **License:** apache-2.0 - **Finetuned from model:** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This Mistral-architecture model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
DL-Project/hatespeech_distilbert
DL-Project
2024-05-22T12:55:17Z
108
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-13T17:41:14Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - recall - precision - f1 model-index: - name: hatespeech_distilbert results: [] widget: - text: "Democrats using African-Americans again." example_title: "Non-Hate Speech Example" - text: "Holy fuck this girl's trash, what a cunt." example_title: "Hate Speech Example" --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hatespeech_distilbert This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9977 - Accuracy: 0.7737 - Recall: 0.8118 - Precision: 0.7526 - F1: 0.7811 And the following results on the test set: - Loss: 1.0640 - Accuracy: 0.7544 - Recall: 0.7930 - Precision: 0.7406 - F1: 0.7659 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | Precision | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.4863 | 0.9935 | 77 | 0.4678 | 0.7701 | 0.7421 | 0.7841 | 0.7625 | | 0.3935 | 2.0 | 155 | 0.4595 | 0.7834 | 0.7340 | 0.8124 | 0.7712 | | 0.2792 | 2.9935 | 232 | 0.5285 | 0.7850 | 0.7291 | 0.8188 | 0.7713 | | 0.1408 | 4.0 | 310 | 0.7130 | 0.7785 | 0.7940 | 0.7684 | 0.7810 | | 0.0945 | 4.9935 | 387 | 0.8230 | 0.7806 | 0.7551 | 0.7937 | 0.7739 | | 0.0541 | 6.0 | 465 | 0.9977 | 0.7737 | 0.8118 | 0.7526 | 0.7811 | | 0.0331 | 6.9935 | 542 | 1.1107 | 0.7753 | 0.7859 | 0.7678 | 0.7768 | | 0.0151 | 8.0 | 620 | 1.1703 | 0.7789 | 0.7543 | 0.7915 | 0.7724 | | 0.0106 | 8.9935 | 697 | 1.2741 | 0.7785 | 0.7616 | 0.7864 | 0.7738 | | 0.0051 | 9.9355 | 770 | 1.2964 | 0.7753 | 0.7851 | 0.7683 | 0.7766 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
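### Example usage

A minimal inference sketch matching the widget examples above, using the standard text-classification pipeline; the label id-to-name mapping depends on the exported config, so inspect the output to interpret it.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="DL-Project/hatespeech_distilbert")
# Returns a label and score; label names depend on the model's config.
print(classifier("Democrats using African-Americans again."))
```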
IR-Cocktail/bert-small-mean-v3-msmarco
IR-Cocktail
2024-05-22T12:54:59Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-22T07:57:58Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 6653 with parameters: ``` {'batch_size': 75, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 10000, "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "correct_bias": false, "eps": 1e-06, "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
arpsad18/ami-v2
arpsad18
2024-05-22T12:50:03Z
1
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:GraydientPlatformAPI/autism-pony", "base_model:adapter:GraydientPlatformAPI/autism-pony", "license:unknown", "region:us" ]
text-to-image
2024-05-22T12:48:14Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: aimee, middle aged woman output: url: images/00062-4142654312.jpeg base_model: GraydientPlatformAPI/autism-pony instance_prompt: aimee license: unknown --- # amiv3 <Gallery /> ## Model description Version 2 of the `aimee` character LoRA, built on the GraydientPlatformAPI/autism-pony base model. ## Trigger words You should use `aimee` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/arpsad18/ami-v2/tree/main) them in the Files & versions tab.
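## Example usage

Besides manual download, the LoRA can typically be applied on top of the base checkpoint with diffusers; a hedged sketch follows (if the repo holds several weight files, pass `weight_name` explicitly to `load_lora_weights`).

```python
import torch
from diffusers import DiffusionPipeline

# Load the base model declared in the card's metadata.
pipe = DiffusionPipeline.from_pretrained(
    "GraydientPlatformAPI/autism-pony", torch_dtype=torch.float16
).to("cuda")
# Apply this repo's LoRA weights on top of the base pipeline.
pipe.load_lora_weights("arpsad18/ami-v2")

image = pipe("aimee, middle aged woman").images[0]
image.save("aimee.png")
```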
giantdev/dippy-gh5T7-sn11m1
giantdev
2024-05-22T12:48:15Z
126
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T12:46:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
betteib/tunisian-data-tokenizer-unigram-v2
betteib
2024-05-22T12:46:51Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-22T12:46:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
YC-Li/Sequence-to-Sequence-ASR-Error-Correction
YC-Li
2024-05-22T12:45:50Z
0
1
null
[ "ASR", "Error Correction", "Crossmodal", "en", "region:us" ]
null
2024-05-22T00:34:00Z
--- language: - en metrics: - wer - bleu - google_bleu tags: - ASR - Error Correction - Crossmodal --- ### Model Description Pre-training settings: 166k samples from Common Voice 13.0 were recognized by Whisper tiny.en. 1,000 random samples were selected as the test set, and the rest were used for training and validation with an 80%-20% split. - Batch size: 256 - Initial learning rate: 1e-5 - Adam optimizer - 30 epochs - Cross-entropy loss - Best checkpoint saved based on WER as the evaluation metric - Decoding is performed using beam search with a beam size of 5 - S2S backbone model adopted from ''[Exploring data augmentation for code generation tasks](https://aclanthology.org/2023.findings-eacl.114/)'' Continued-training settings: - 2 epochs on gold-gold pairs from ''[Ted talk data](https://cris.fbk.eu/bitstream/11582/104409/1/WIT3-EAMT2012.pdf)'' to prevent the over-correction problem
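Below is a minimal decoding sketch matching the settings above (beam search with a beam size of 5). It assumes the checkpoint loads through transformers' seq2seq classes; the repo id and the example hypothesis are placeholders, not confirmed by the author.

```python
# Hedged sketch: assumes an encoder-decoder checkpoint compatible with
# AutoModelForSeq2SeqLM; adjust the model id or point at a local export.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "YC-Li/Sequence-to-Sequence-ASR-Error-Correction"  # assumed loadable
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# An ASR hypothesis (e.g. from Whisper tiny.en) to be corrected
hypothesis = "i red the book yesterday"
inputs = tokenizer(hypothesis, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```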
steve1989/Internlm7b-fingpt-sentimentV2
steve1989
2024-05-22T12:45:30Z
5
0
transformers
[ "transformers", "safetensors", "internlm2", "feature-extraction", "custom_code", "arxiv:1910.09700", "region:us" ]
feature-extraction
2024-05-22T12:29:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nold/Phi-3-mini-4k-instruct-function-calling-GGUF
nold
2024-05-22T12:44:59Z
77
6
null
[ "gguf", "dataset:mzbac/function-calling-phi-3-format-v1.1", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-21T18:16:48Z
--- datasets: - mzbac/function-calling-phi-3-format-v1.1 --- # Model Fine-tuned the Phi3 instruction model for function calling via MLX-LM using https://huggingface.co/datasets/mzbac/function-calling-phi-3-format-v1.1 # Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "mzbac/Phi-3-mini-4k-instruct-function-calling" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) tool = { "name": "search_web", "description": "Perform a web search for a given search terms.", "parameter": { "type": "object", "properties": { "search_terms": { "type": "array", "items": {"type": "string"}, "description": "The search queries for which the search is performed.", "required": True, } }, }, } messages = [ { "role": "user", "content": f"You are a helpful assistant with access to the following functions. Use them if required - {str(tool)}", }, {"role": "user", "content": "Any news in Melbourne today, May 7, 2024?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|end|>")] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.1, ) response = outputs[0] print(tokenizer.decode(response)) # <s><|user|> You are a helpful assistant with access to the following functions. Use them if required - {'name': 'search_web', 'description': 'Perform a web search for a given search terms.', 'parameter': {'type': 'object', 'properties': {'search_terms': {'type': 'array', 'items': {'type': 'string'}, 'description': 'The search queries for which the search is performed.', 'required': True}}}}<|end|><|assistant|> # <|user|> Any news in Melbourne today, May 7, 2024?<|end|> # <|assistant|> <functioncall> {"name": "search_web", "arguments": {"search_terms": ["news", "Melbourne", "May 7, 2024"]}}<|end|> ``` # Training hyperparameters lora_config.yaml ```yaml # The path to the local model directory or Hugging Face repo. model: "microsoft/Phi-3-mini-4k-instruct" # Whether or not to train (boolean) train: true # Directory with {train, valid, test}.jsonl files data: "data" # The PRNG seed seed: 0 # Number of layers to fine-tune lora_layers: 32 # Minibatch size. batch_size: 1 # Iterations to train for. iters: 111000 # Number of validation batches, -1 uses the entire validation set. val_batches: -1 # Adam learning rate. learning_rate: 1e-6 # Number of training steps between loss reporting. steps_per_report: 10 # Number of training steps between validations. steps_per_eval: 200 # Load path to resume training with the given adapter weights. # resume_adapter_file: "adapters/adapters.safetensors" # Save/load path for the trained adapter weights. adapter_path: "adapters" # Save the model every N iterations. save_every: 1000 # Evaluate on the test set after training test: false # Number of test set batches, -1 uses the entire test set. test_batches: 100 # Maximum sequence length. max_seq_length: 4096 # Use gradient checkpointing to reduce memory use. grad_checkpoint: false # LoRA parameters can only be specified in a config file lora_parameters: # The layer keys to apply LoRA to. 
# These will be applied for the last lora_layers keys: ['mlp.down_proj','mlp.gate_up_proj','self_attn.qkv_proj','self_attn.o_proj'] rank: 128 alpha: 256 scale: 10.0 dropout: 0.05 ``` *** Quantization of Model [mzbac/Phi-3-mini-4k-instruct-function-calling](https://huggingface.co/mzbac/Phi-3-mini-4k-instruct-function-calling). Created using [llm-quantizer](https://github.com/Nold360/llm-quantizer) Pipeline
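The Python snippet above targets the original full-precision model; for the quantized GGUF files in this repo, a minimal llama-cpp-python sketch is shown below. The `.gguf` filename is an assumption; use whichever quantization you download from the Files & versions tab.

```python
# Hedged sketch using llama-cpp-python; the model_path filename is assumed.
from llama_cpp import Llama

llm = Llama(
    model_path="Phi-3-mini-4k-instruct-function-calling_Q4_K_M.gguf",  # assumed filename
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Any news in Melbourne today, May 7, 2024?"}],
    temperature=0.1,
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```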
BilalMuftuoglu/deit-base-distilled-patch16-224-85-fold4
BilalMuftuoglu
2024-05-22T12:44:42Z
17
0
transformers
[ "transformers", "tensorboard", "safetensors", "deit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-base-distilled-patch16-224", "base_model:finetune:facebook/deit-base-distilled-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-22T12:25:57Z
--- license: apache-2.0 base_model: facebook/deit-base-distilled-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: deit-base-distilled-patch16-224-85-fold4 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9318181818181818 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deit-base-distilled-patch16-224-85-fold4 This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2141 - Accuracy: 0.9318 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 2 | 0.6100 | 0.7273 | | No log | 2.0 | 4 | 0.6938 | 0.7045 | | No log | 3.0 | 6 | 0.7568 | 0.7045 | | No log | 4.0 | 8 | 0.6140 | 0.7045 | | 0.5388 | 5.0 | 10 | 0.4976 | 0.75 | | 0.5388 | 6.0 | 12 | 0.4809 | 0.7273 | | 0.5388 | 7.0 | 14 | 0.5276 | 0.7273 | | 0.5388 | 8.0 | 16 | 0.4455 | 0.7955 | | 0.5388 | 9.0 | 18 | 0.3915 | 0.8409 | | 0.4154 | 10.0 | 20 | 0.5070 | 0.7955 | | 0.4154 | 11.0 | 22 | 0.3747 | 0.8182 | | 0.4154 | 12.0 | 24 | 0.3027 | 0.8864 | | 0.4154 | 13.0 | 26 | 0.3053 | 0.8636 | | 0.4154 | 14.0 | 28 | 0.3194 | 0.8409 | | 0.3258 | 15.0 | 30 | 0.3134 | 0.8864 | | 0.3258 | 16.0 | 32 | 0.2925 | 0.8864 | | 0.3258 | 17.0 | 34 | 0.2449 | 0.8864 | | 0.3258 | 18.0 | 36 | 0.2308 | 0.8864 | | 0.3258 | 19.0 | 38 | 0.2141 | 0.9318 | | 0.2528 | 20.0 | 40 | 0.2330 | 0.9318 | | 0.2528 | 21.0 | 42 | 0.2173 | 0.9318 | | 0.2528 | 22.0 | 44 | 0.2450 | 0.9091 | | 0.2528 | 23.0 | 46 | 0.2549 | 0.9091 | | 0.2528 | 24.0 | 48 | 0.4341 | 0.75 | | 0.175 | 25.0 | 50 | 0.2358 | 0.9091 | | 0.175 | 26.0 | 52 | 0.2828 | 0.8864 | | 0.175 | 27.0 | 54 | 0.2236 | 0.9091 | | 0.175 | 28.0 | 56 | 0.2591 | 0.8636 | | 0.175 | 29.0 | 58 | 0.2702 | 0.8864 | | 0.169 | 30.0 | 60 | 0.2910 | 0.8636 | | 0.169 | 31.0 | 62 | 0.3594 | 0.9091 | | 0.169 | 32.0 | 64 | 0.4246 | 0.8864 | | 0.169 | 33.0 | 66 | 0.2655 | 0.8864 | | 0.169 | 34.0 | 68 | 0.2581 | 0.8864 | | 0.1336 | 35.0 | 70 | 0.2494 | 0.8409 | | 0.1336 | 36.0 | 72 | 0.2438 | 0.8636 | | 0.1336 | 37.0 | 74 | 0.3246 | 0.8636 | | 0.1336 | 38.0 | 76 | 0.2887 | 0.8409 | | 0.1336 | 39.0 | 78 | 0.3559 | 0.8409 | | 0.1281 | 40.0 | 80 | 0.3274 | 0.8864 | | 0.1281 | 41.0 | 82 | 0.3371 | 0.8409 | | 0.1281 | 42.0 | 84 | 0.3902 | 0.8409 | | 0.1281 | 43.0 | 86 | 0.3100 | 0.8409 | | 0.1281 | 44.0 | 88 | 0.3113 | 0.8636 | | 0.136 | 45.0 | 90 | 0.3244 | 0.8409 | | 0.136 | 46.0 | 92 | 0.3765 | 0.8864 | | 0.136 | 47.0 | 94 | 0.3838 | 0.8864 | | 0.136 | 48.0 | 96 | 0.3845 
| 0.7955 | | 0.136 | 49.0 | 98 | 0.3910 | 0.7955 | | 0.0934 | 50.0 | 100 | 0.4889 | 0.8636 | | 0.0934 | 51.0 | 102 | 0.6680 | 0.8182 | | 0.0934 | 52.0 | 104 | 0.4264 | 0.8864 | | 0.0934 | 53.0 | 106 | 0.3266 | 0.8182 | | 0.0934 | 54.0 | 108 | 0.3168 | 0.8864 | | 0.0999 | 55.0 | 110 | 0.3671 | 0.8182 | | 0.0999 | 56.0 | 112 | 0.4684 | 0.8182 | | 0.0999 | 57.0 | 114 | 0.4254 | 0.8182 | | 0.0999 | 58.0 | 116 | 0.3195 | 0.8182 | | 0.0999 | 59.0 | 118 | 0.3860 | 0.8864 | | 0.1145 | 60.0 | 120 | 0.4805 | 0.8636 | | 0.1145 | 61.0 | 122 | 0.3864 | 0.8182 | | 0.1145 | 62.0 | 124 | 0.3347 | 0.8182 | | 0.1145 | 63.0 | 126 | 0.3144 | 0.8182 | | 0.1145 | 64.0 | 128 | 0.3267 | 0.8636 | | 0.0769 | 65.0 | 130 | 0.3592 | 0.8636 | | 0.0769 | 66.0 | 132 | 0.3520 | 0.8636 | | 0.0769 | 67.0 | 134 | 0.3632 | 0.8636 | | 0.0769 | 68.0 | 136 | 0.3955 | 0.8636 | | 0.0769 | 69.0 | 138 | 0.4053 | 0.8182 | | 0.0976 | 70.0 | 140 | 0.4272 | 0.8636 | | 0.0976 | 71.0 | 142 | 0.4345 | 0.8409 | | 0.0976 | 72.0 | 144 | 0.3943 | 0.8636 | | 0.0976 | 73.0 | 146 | 0.3827 | 0.8636 | | 0.0976 | 74.0 | 148 | 0.4133 | 0.8409 | | 0.0981 | 75.0 | 150 | 0.4311 | 0.8409 | | 0.0981 | 76.0 | 152 | 0.4126 | 0.8409 | | 0.0981 | 77.0 | 154 | 0.3651 | 0.8636 | | 0.0981 | 78.0 | 156 | 0.3511 | 0.8182 | | 0.0981 | 79.0 | 158 | 0.3625 | 0.8636 | | 0.085 | 80.0 | 160 | 0.3607 | 0.8636 | | 0.085 | 81.0 | 162 | 0.3470 | 0.8409 | | 0.085 | 82.0 | 164 | 0.3639 | 0.8409 | | 0.085 | 83.0 | 166 | 0.3750 | 0.8409 | | 0.085 | 84.0 | 168 | 0.3726 | 0.7955 | | 0.0831 | 85.0 | 170 | 0.3740 | 0.8182 | | 0.0831 | 86.0 | 172 | 0.3807 | 0.8636 | | 0.0831 | 87.0 | 174 | 0.3875 | 0.8636 | | 0.0831 | 88.0 | 176 | 0.3886 | 0.8409 | | 0.0831 | 89.0 | 178 | 0.4017 | 0.7955 | | 0.0811 | 90.0 | 180 | 0.4271 | 0.7955 | | 0.0811 | 91.0 | 182 | 0.4293 | 0.7955 | | 0.0811 | 92.0 | 184 | 0.4243 | 0.7727 | | 0.0811 | 93.0 | 186 | 0.4088 | 0.7727 | | 0.0811 | 94.0 | 188 | 0.3986 | 0.7955 | | 0.0692 | 95.0 | 190 | 0.3963 | 0.8182 | | 0.0692 | 96.0 | 192 | 0.3987 | 0.8636 | | 0.0692 | 97.0 | 194 | 0.4020 | 0.8636 | | 0.0692 | 98.0 | 196 | 0.4015 | 0.8636 | | 0.0692 | 99.0 | 198 | 0.4009 | 0.8636 | | 0.0644 | 100.0 | 200 | 0.4002 | 0.8636 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
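A minimal inference sketch for this fine-tuned checkpoint (not part of the original card); the image path is a placeholder.

```python
# Hedged sketch: standard transformers image-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="BilalMuftuoglu/deit-base-distilled-patch16-224-85-fold4",
)
print(classifier("example.jpg"))  # placeholder image path
```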
arpsad18/queenbee_characters
arpsad18
2024-05-22T12:43:36Z
8
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:GraydientPlatformAPI/autism-pony", "base_model:adapter:GraydientPlatformAPI/autism-pony", "license:unknown", "region:us" ]
text-to-image
2024-05-22T12:41:31Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: 'a1m33 woman, ' output: url: images/00012-1257148643.jpeg base_model: GraydientPlatformAPI/autism-pony instance_prompt: A1M33 woman, Q3NBR1D3, old man Y4KF00, 50M3 license: unknown --- # Queen Bee manhwa <Gallery /> ## Model description Characters from the manhwa *Queen Bee*. ## Trigger words You should use `A1M33 woman`, `Q3NBR1D3`, `old man Y4KF00`, or `50M3` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/arpsad18/queenbee_characters/tree/main) them in the Files & versions tab.
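A hedged sketch of one way to apply this LoRA with diffusers and a trigger word from the card; the pipeline class is auto-detected from the base model listed in the metadata, and the LoRA weight filename is an assumption (check the Files & versions tab for the actual name).

```python
# Hedged sketch: base model taken from the card metadata; weight_name is assumed.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "GraydientPlatformAPI/autism-pony", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "arpsad18/queenbee_characters",
    weight_name="queenbee_characters.safetensors",  # assumed filename
)
image = pipe("A1M33 woman, portrait").images[0]
image.save("queenbee.png")
```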
Netta1994/setfit_unique_2k
Netta1994
2024-05-22T12:43:35Z
8
0
setfit
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2", "model-index", "region:us" ]
text-classification
2024-05-22T12:43:01Z
--- library_name: setfit tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer base_model: sentence-transformers/paraphrase-mpnet-base-v2 metrics: - accuracy widget: - text: 'I apologize, but I cannot provide information on specific individuals, including their names or personal relationships, as this could potentially violate their privacy and personal boundaries. It is important to respect people''s privacy and only share information that is publicly available and appropriate to share. Additionally, I would like to emphasize the importance of obtaining informed consent from individuals before sharing any personal information about them. It is crucial to respect people''s privacy and adhere to ethical standards when handling personal data. If you have any other questions or concerns, please feel free to ask.' - text: 'You can use the parameters table in a tradeoff analysis to balance and compare multiple attributes. Specifically, it allows you to: 1. Compare different revision configurations of a project. 2. Evaluate product parameters against verification requests. 3. Assess product parameters in relation to product freeze points. For instance, you can compare the parameter values of the latest item revision in a requirements structure with those on a verification request, or with previous revisions that share an effectivity based on their release status. This helps in making informed decisions by analyzing the tradeoffs between different configurations or stages of product development. If you need further assistance or have more questions, feel free to ask.' - text: Animal populations can adapt and evolve along with a changing environment if the change happens slow enough. Polar bears may be able to adapt to a temperature change over 100000 years, but not be able to adapt to the same temperature change over 1000 years. Since this recent anthropogenic driven change is happening faster than any natural temperature change, so I would say they are in danger in the wild. I guess we will be able to see them in zoos though. - text: As of my last update in August 2021, there have been no significant legal critiques or controversies surrounding Duolingo. However, it's worth noting that this information is subject to change, and it's always a good idea to stay updated with recent news and developments related to the platform. - text: 'The author clearly cites it as a Reddit thread. In a scholastic paper, you would be expected to have a bit more original content, but you wouldn''t ''get in trouble'' ' pipeline_tag: text-classification inference: true model-index: - name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.9840425531914894 name: Accuracy --- # SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. 
Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1.0 | <ul><li>'Based solely on the given details, it is not feasible to ascertain the physical impacts on the body when an individual is fatigued and subsequently encounters a surge of energy. The provided data primarily concentrates on pH and CO levels in various contexts such as conductivity, soil, water, and culture vessels. Nevertheless, it does not'</li><li>'To get a ShoutOut to pop up monthly, you would need to set the frequency options for the ShoutOut to trigger once a month. However, the provided contexts only mention frequency options such as "Once," "Once a Day," and "Always." There is no direct mention of a monthly frequency option in the provided contexts.\n\nGiven this information, the answer to your question based on the provided contexts is: "I\'m sorry, I\'m not sure how to answer your question. Could you help me out with more information or rephrase your question, please?'</li><li>"I can see how having the credit card details of a business would provide a deeper understanding of their expenditures. Yet, releasing information such as credit card numbers is strictly against privacy policies and regulations. 
It's illegal, unethical, and a severe breach of trust to share such confidential details."</li></ul> | | 0.0 | <ul><li>'pRect is an object that contains the x, y, width, and height properties. It is used to determine the index of the object in the nodes array and to insert the object into the nodes object.'</li><li>'Yes, you can search an outside knowledge base using the keywords a user searched for in the player menu. WalkMe offers a Search Provider Integration feature that allows you to supplement your WalkMe items with your existing knowledge base or support center resources. Once enabled, a search performed within the WalkMe Widget will yield results from the specified domains, showing your existing content alongside your WalkMe content. The current supported search providers for this integration are Zendesk, Desk, Bing, and Google. If your current search provider is not on the supported list, please reach out to your Account Manager for further assistance. For more information on how to set up the Search Provider Integration, please refer to our Support article. How else can I assist you today?'</li><li>'Write a precise answer to "how to export homepage to pdf" only based on "KB12345". Only when absolutely confident that If the information is not present in the "KB12345", respond with Answer Not Found.'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.9840 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("Netta1994/setfit_unique_2k") # Run inference preds = model("The author clearly cites it as a Reddit thread. In a scholastic paper, you would be expected to have a bit more original content, but you wouldn't 'get in trouble' ") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 1 | 89.6623 | 412 | | Label | Training Sample Count | |:------|:----------------------| | 0.0 | 1454 | | 1.0 | 527 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0002 | 1 | 0.3718 | - | | 0.0101 | 50 | 0.2723 | - | | 0.0202 | 100 | 0.1298 | - | | 0.0303 | 150 | 0.091 | - | | 0.0404 | 200 | 0.046 | - | | 0.0505 | 250 | 0.0348 | - | | 0.0606 | 300 | 0.0208 | - | | 0.0707 | 350 | 0.0044 | - | | 0.0808 | 400 | 0.0041 | - | | 0.0909 | 450 | 0.0046 | - | | 0.1009 | 500 | 0.0007 | - | | 0.1110 | 550 | 0.0004 | - | | 0.1211 | 600 | 0.0601 | - | | 0.1312 | 650 | 0.0006 | - | | 0.1413 | 700 | 0.0006 | - | | 0.1514 | 750 | 0.0661 | - | | 0.1615 | 800 | 0.0002 | - | | 0.1716 | 850 | 0.0009 | - | | 0.1817 | 900 | 0.0002 | - | | 0.1918 | 950 | 0.0017 | - | | 0.2019 | 1000 | 0.0007 | - | | 0.2120 | 1050 | 0.0606 | - | | 0.2221 | 1100 | 0.0001 | - | | 0.2322 | 1150 | 0.0004 | - | | 0.2423 | 1200 | 0.0029 | - | | 0.2524 | 1250 | 0.0001 | - | | 0.2625 | 1300 | 0.0001 | - | | 0.2726 | 1350 | 0.0001 | - | | 0.2827 | 1400 | 0.0047 | - | | 0.2928 | 1450 | 0.0 | - | | 0.3028 | 1500 | 0.0 | - | | 0.3129 | 1550 | 0.0 | - | | 0.3230 | 1600 | 0.0 | - | | 0.3331 | 1650 | 0.0001 | - | | 0.3432 | 1700 | 0.0004 | - | | 0.3533 | 1750 | 0.0 | - | | 0.3634 | 1800 | 0.0 | - | | 0.3735 | 1850 | 0.0 | - | | 0.3836 | 1900 | 0.0 | - | | 0.3937 | 1950 | 0.0 | - | | 0.4038 | 2000 | 0.0 | - | | 0.4139 | 2050 | 0.0 | - | | 0.4240 | 2100 | 0.0 | - | | 0.4341 | 2150 | 0.0 | - | | 0.4442 | 2200 | 0.0 | - | | 0.4543 | 2250 | 0.0001 | - | | 0.4644 | 2300 | 0.0 | - | | 0.4745 | 2350 | 0.0 | - | | 0.4846 | 2400 | 0.0 | - | | 0.4946 | 2450 | 0.0 | - | | 0.5047 | 2500 | 0.0 | - | | 0.5148 | 2550 | 0.0 | - | | 0.5249 | 2600 | 0.0 | - | | 0.5350 | 2650 | 0.0 | - | | 0.5451 | 2700 | 0.0 | - | | 0.5552 | 2750 | 0.0001 | - | | 0.5653 | 2800 | 0.0 | - | | 0.5754 | 2850 | 0.0 | - | | 0.5855 | 2900 | 0.0 | - | | 0.5956 | 2950 | 0.0 | - | | 0.6057 | 3000 | 0.0 | - | | 0.6158 | 3050 | 0.0 | - | | 0.6259 | 3100 | 0.0002 | - | | 0.6360 | 3150 | 0.0 | - | | 0.6461 | 3200 | 0.0 | - | | 0.6562 | 3250 | 0.0002 | - | | 0.6663 | 3300 | 0.0 | - | | 0.6764 | 3350 | 0.0 | - | | 0.6865 | 3400 | 0.0 | - | | 0.6965 | 3450 | 0.0 | - | | 0.7066 | 3500 | 0.0 | - | | 0.7167 | 3550 | 0.0 | - | | 0.7268 | 3600 | 0.0 | - | | 0.7369 | 3650 | 0.0 | - | | 0.7470 | 3700 | 0.0 | - | | 0.7571 | 3750 | 0.0 | - | | 0.7672 | 3800 | 0.0 | - | | 0.7773 | 3850 | 0.0 | - | | 0.7874 | 3900 | 0.0 | - | | 0.7975 | 3950 | 0.0 | - | | 0.8076 | 4000 | 0.0 | - | | 0.8177 | 4050 | 0.0 | - | | 0.8278 | 4100 | 0.0 | - | | 0.8379 | 4150 | 0.0 | - | | 0.8480 | 4200 | 0.0 | - | | 0.8581 | 4250 | 0.0 | - | | 0.8682 | 4300 | 0.0 | - | | 0.8783 | 4350 | 0.0 | - | | 0.8884 | 4400 | 0.0 | - | | 0.8984 | 4450 | 0.0 | - | | 0.9085 | 4500 | 0.0 | - | | 0.9186 | 4550 | 0.0 | - | | 0.9287 | 4600 | 0.0 | - | | 0.9388 | 4650 | 0.0 | - | | 
0.9489 | 4700 | 0.0 | - | | 0.9590 | 4750 | 0.0 | - | | 0.9691 | 4800 | 0.0 | - | | 0.9792 | 4850 | 0.0 | - | | 0.9893 | 4900 | 0.0 | - | | 0.9994 | 4950 | 0.0 | - | ### Framework Versions - Python: 3.10.14 - SetFit: 1.0.3 - Sentence Transformers: 2.7.0 - Transformers: 4.40.1 - PyTorch: 2.2.0+cu121 - Datasets: 2.19.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
JunhaoZhuang/PowerPaint-v2-1
JunhaoZhuang
2024-05-22T12:42:59Z
0
50
diffusers
[ "diffusers", "safetensors", "arxiv:2312.03594", "license:apache-2.0", "region:us" ]
null
2024-05-22T07:50:09Z
--- license: apache-2.0 --- # A Task is Worth One Word: Learning with Task Prompts for High-Quality Versatile Image Inpainting ### [Project Page](https://powerpaint.github.io/) | [Paper](https://arxiv.org/abs/2312.03594) | [Online Demo(OpenXlab)](https://openxlab.org.cn/apps/detail/rangoliu/PowerPaint#basic-information) This README provides a step-by-step guide to download the repository, set up the required virtual environment named "PowerPaint" using conda, and run PowerPaint with or without ControlNet. **Feel free to try it and give it a star!**:star: ## 🚀 News **May 22, 2024**:fire: - We open source the model weights for PowerPaint v2-1. [![HuggingFace Model](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-blue)](https://huggingface.co/JunhaoZhuang/PowerPaint-v2-1) **April 7, 2024**:fire: - We open source the model weights and code for PowerPaint v2. [![Open in OpenXLab](https://cdn-static.openxlab.org.cn/header/openxlab_models.svg)](https://openxlab.org.cn/models/detail/zhuangjunhao/PowerPaint_v2) [![HuggingFace Model](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-blue)](https://huggingface.co/JunhaoZhuang/PowerPaint_v2) **April 6, 2024**: - We have retrained a new PowerPaint, taking inspiration from Brushnet. The [Online Demo](https://openxlab.org.cn/apps/detail/rangoliu/PowerPaint) has been updated accordingly. **We plan to release the model weights and code as open source in the next few days**. - Tips: We preserve the cross-attention layer that was deleted by BrushNet for the task prompts input. | | Object insertion | Object Removal|Shape-guided Object Insertion|Outpainting| |-----------------|-----------------|-----------------|-----------------|-----------------| | Original Image| ![cropinput](https://github.com/Sanster/IOPaint/assets/108931120/bf91a1e8-8eaf-4be6-b47d-b8e43c9d182a)|![cropinput](https://github.com/Sanster/IOPaint/assets/108931120/c7e56119-aa57-4761-b6aa-56f8a0b72456)|![image](https://github.com/Sanster/IOPaint/assets/108931120/cbbfe84e-2bf1-425b-8349-f7874f2e978c)|![cropinput](https://github.com/Sanster/IOPaint/assets/108931120/134bb707-0fe5-4d22-a0ca-d440fa521365)| | Output| ![image](https://github.com/Sanster/IOPaint/assets/108931120/ee777506-d336-4275-94f6-31abf9521866)| ![image](https://github.com/Sanster/IOPaint/assets/108931120/e9d8cf6c-13b8-443c-b327-6f27da54cda6)|![image](https://github.com/Sanster/IOPaint/assets/108931120/cc3008c9-37dd-4d98-ad43-58f67be872dc)|![image](https://github.com/Sanster/IOPaint/assets/108931120/18d8ca23-e6d7-4680-977f-e66341312476)| **December 22, 2023**:wrench: - The logical error in loading ControlNet has been rectified. The `gradio_PowerPaint.py` file and [Online Demo](https://openxlab.org.cn/apps/detail/rangoliu/PowerPaint) have also been updated. **December 18, 2023** *Enhanced PowerPaint Model* - We are delighted to announce the release of more stable model weights. These refined weights can now be accessed on [Hugging Face](https://huggingface.co/JunhaoZhuang/PowerPaint-v1/tree/main). The `gradio_PowerPaint.py` file and [Online Demo](https://openxlab.org.cn/apps/detail/rangoliu/PowerPaint) have also been updated as part of this release. 
________________ <img src='https://github.com/open-mmlab/mmagic/assets/12782558/acd01391-c73f-4997-aafd-0869aebcc915'/> ## Getting Started ```bash # Clone the Repository git clone https://github.com/zhuang2002/PowerPaint.git # Navigate to the Repository cd projects/powerpaint # Create Virtual Environment with Conda conda create --name PowerPaint python=3.9 conda activate PowerPaint # Install Dependencies pip install -r requirements.txt ``` ## PowerPaint v2 ```bash python gradio_PowerPaint_BrushNet.py ``` ## PowerPaint v1 ```bash # Create Models Folder mkdir models # Set up Git LFS git lfs install # Clone PowerPaint Model git lfs clone https://huggingface.co/JunhaoZhuang/PowerPaint-v1/ ./models python gradio_PowerPaint.py ``` This command will launch the Gradio interface for PowerPaint. Feel free to explore and edit images with PowerPaint! ## BibTeX ``` @misc{zhuang2023task, title={A Task is Worth One Word: Learning with Task Prompts for High-Quality Versatile Image Inpainting}, author={Junhao Zhuang and Yanhong Zeng and Wenran Liu and Chun Yuan and Kai Chen}, year={2023}, eprint={2312.03594}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
westlake-repl/ProTrek_650M_UniRef50
westlake-repl
2024-05-22T12:38:44Z
0
4
null
[ "arxiv:2103.00020", "license:mit", "region:us" ]
null
2024-05-22T02:48:33Z
--- license: mit --- **Github repo: https://github.com/westlake-repl/ProTrek** ## Overview ProTrek is a multimodal model that integrates protein sequence, protein structure, and text information for better protein understanding. It adopts contrastive learning to learn the representations of protein sequence and structure. During the pre-training phase, we calculate the InfoNCE loss between each pair of modalities, as [CLIP](https://arxiv.org/abs/2103.00020) does. ## Model architecture **Protein sequence encoder**: [esm2_t33_650M_UR50D](https://huggingface.co/facebook/esm2_t33_650M_UR50D) **Protein structure encoder**: foldseek_t30_150M (identical architecture to esm2 except that the vocabulary only contains 3Di tokens) **Text encoder**: [BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext) ## Obtain embeddings and calculate similarity score (please clone our repo first) ```python import torch from model.ProtTrek.protrek_trimodal_model import ProTrekTrimodalModel from utils.foldseek_util import get_struc_seq # Load model config = { "protein_config": "weights/ProTrek_650M_UniRef50/esm2_t33_650M_UR50D", "text_config": "weights/ProTrek_650M_UniRef50/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext", "structure_config": "weights/ProTrek_650M_UniRef50/foldseek_t30_150M", "load_protein_pretrained": False, "load_text_pretrained": False, "from_checkpoint": "weights/ProTrek_650M_UniRef50/ProTrek_650M_UniRef50.pt" } device = "cuda" model = ProTrekTrimodalModel(**config).eval().to(device) # Load protein and text pdb_path = "example/8ac8.cif" seqs = get_struc_seq("bin/foldseek", pdb_path, ["A"])["A"] aa_seq = seqs[0] foldseek_seq = seqs[1].lower() text = "Replication initiator in the monomeric form, and autogenous repressor in the dimeric form." with torch.no_grad(): # Obtain protein sequence embedding seq_embedding = model.get_protein_repr([aa_seq]) print("Protein sequence embedding shape:", seq_embedding.shape) # Obtain protein structure embedding struc_embedding = model.get_structure_repr([foldseek_seq]) print("Protein structure embedding shape:", struc_embedding.shape) # Obtain text embedding text_embedding = model.get_text_repr([text]) print("Text embedding shape:", text_embedding.shape) # Calculate similarity score between protein sequence and structure seq_struc_score = seq_embedding @ struc_embedding.T / model.temperature print("Similarity score between protein sequence and structure:", seq_struc_score.item()) # Calculate similarity score between protein sequence and text seq_text_score = seq_embedding @ text_embedding.T / model.temperature print("Similarity score between protein sequence and text:", seq_text_score.item()) # Calculate similarity score between protein structure and text struc_text_score = struc_embedding @ text_embedding.T / model.temperature print("Similarity score between protein structure and text:", struc_text_score.item()) ```
egoist000/yelp_roberta_sentiment_analysis
egoist000
2024-05-22T12:36:49Z
62
0
transformers
[ "transformers", "tf", "roberta", "text-classification", "generated_from_keras_callback", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-11T16:52:11Z
--- license: mit base_model: FacebookAI/roberta-base tags: - generated_from_keras_callback model-index: - name: egoist000/yelp_roberta_sentiment_analysis results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # egoist000/yelp_roberta_sentiment_analysis This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4439 - Validation Loss: 0.4447 - Train Accuracy: 0.8077 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 172800, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 19200, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.1} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.5457 | 0.4519 | 0.807 | 0 | | 0.4439 | 0.4447 | 0.8077 | 1 | ### Framework versions - Transformers 4.39.3 - TensorFlow 2.15.0 - Datasets 2.18.0 - Tokenizers 0.15.2
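A minimal TensorFlow inference sketch (not part of the original card); the id-to-label mapping is not documented here, so the raw class index is printed.

```python
# Hedged sketch: loads the TF weights and prints the predicted class index.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "egoist000/yelp_roberta_sentiment_analysis"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The food was great and the staff were friendly!", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))  # class meaning not documented in the card
```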
jupiterjjj/LawGLM3
jupiterjjj
2024-05-22T12:35:17Z
6
0
transformers
[ "transformers", "safetensors", "chatglm", "feature-extraction", "custom_code", "dataset:ShengbinYue/DISC-Law-SFT", "license:mit", "region:us" ]
feature-extraction
2024-05-22T11:45:57Z
--- license: mit datasets: - ShengbinYue/DISC-Law-SFT --- This is a model fine-tuned from ChatGLM3 with LoRA.
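A minimal loading sketch (not part of the original card): ChatGLM3 ships custom modeling code, hence `trust_remote_code=True`; the `.chat()` helper is the upstream ChatGLM3 interface and is assumed to be preserved in this fine-tune.

```python
# Hedged sketch: assumes the upstream ChatGLM3 chat interface is preserved.
from transformers import AutoModel, AutoTokenizer

model_id = "jupiterjjj/LawGLM3"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True).half().cuda()

response, history = model.chat(tokenizer, "What are the elements of a valid contract?", history=[])
print(response)
```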
sj21867/ai_art_exp1_vit_final
sj21867
2024-05-22T12:33:48Z
193
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-22T11:23:31Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: ai_art_exp1_vit_final results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ai_art_exp1_vit_final This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Accuracy: {'accuracy': 0.9946666666666667} - Overall Accuracy: 0.9947 - Loss: 0.0231 - Human Accuracy: 0.99 - Ld Accuracy: 0.998 - Sd Accuracy: 0.996 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Accuracy | Overall Accuracy | Validation Loss | Human Accuracy | Ld Accuracy | Sd Accuracy | |:-------------:|:------:|:----:|:--------------------------------:|:----------------:|:---------------:|:--------------:|:-----------:|:-----------:| | 0.198 | 0.992 | 93 | {'accuracy': 0.9506666666666667} | 0.9507 | 0.1906 | 0.8548 | 0.9981 | 0.9959 | | 0.0647 | 1.9947 | 187 | {'accuracy': 0.9793333333333333} | 0.9793 | 0.0811 | 0.9489 | 0.9923 | 0.9959 | | 0.0395 | 2.9973 | 281 | {'accuracy': 0.988} | 0.988 | 0.0567 | 0.9734 | 0.9904 | 1.0 | | 0.069 | 4.0 | 375 | {'accuracy': 0.9933333333333333} | 0.9933 | 0.0399 | 0.9816 | 1.0 | 0.9980 | | 0.0456 | 4.992 | 468 | {'accuracy': 0.9946666666666667} | 0.9947 | 0.0309 | 0.9877 | 1.0 | 0.9959 | | 0.0324 | 5.9947 | 562 | {'accuracy': 0.9906666666666667} | 0.9907 | 0.0444 | 0.9734 | 1.0 | 0.9980 | | 0.0136 | 6.9973 | 656 | {'accuracy': 0.996} | 0.996 | 0.0234 | 0.9939 | 1.0 | 0.9939 | | 0.0137 | 8.0 | 750 | {'accuracy': 0.9953333333333333} | 0.9953 | 0.0218 | 0.9898 | 0.9962 | 1.0 | | 0.0105 | 8.992 | 843 | {'accuracy': 0.9953333333333333} | 0.9953 | 0.0222 | 0.9877 | 1.0 | 0.9980 | | 0.0111 | 9.92 | 930 | {'accuracy': 0.9986666666666667} | 0.9987 | 0.0122 | 0.9980 | 0.9981 | 1.0 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
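A minimal inference sketch using the processor/model pair directly (not part of the original card); the image path is a placeholder.

```python
# Hedged sketch: predicts one of the card's classes (human / ld / sd).
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "sj21867/ai_art_exp1_vit_final"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")
pred = model(**inputs).logits.argmax(-1).item()
print(model.config.id2label[pred])
```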
Fiie/lagal
Fiie
2024-05-22T12:29:05Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-22T12:29:05Z
--- license: apache-2.0 ---
matthieuzone/STILTONter
matthieuzone
2024-05-22T12:28:44Z
2
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-22T06:16:35Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks cheese widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - matthieuzone/STILTONter <Gallery /> ## Model description These are matthieuzone/STILTONter LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of sks cheese` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/matthieuzone/STILTONter/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline (a hedged sketch follows below this card) ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
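As flagged by the TODO in the card above, here is a hedged sketch of one way to run this LoRA: SDXL base plus these adapter weights and the `sks` trigger prompt. Using the fp16-fix VAE mirrors the training setup noted in the card; none of this is confirmed by the author.

```python
# Hedged sketch: SDXL base + this LoRA; diffusers resolves the default LoRA filename.
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("matthieuzone/STILTONter")
image = pipe("a photo of sks cheese on a wooden board").images[0]
image.save("sks_cheese.png")
```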
BilalMuftuoglu/deit-base-distilled-patch16-224-85-fold3
BilalMuftuoglu
2024-05-22T12:25:47Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "deit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-base-distilled-patch16-224", "base_model:finetune:facebook/deit-base-distilled-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-22T12:06:10Z
--- license: apache-2.0 base_model: facebook/deit-base-distilled-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: deit-base-distilled-patch16-224-85-fold3 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9090909090909091 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deit-base-distilled-patch16-224-85-fold3 This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3477 - Accuracy: 0.9091 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 2 | 0.7892 | 0.3409 | | No log | 2.0 | 4 | 0.5546 | 0.7727 | | No log | 3.0 | 6 | 0.6493 | 0.7727 | | No log | 4.0 | 8 | 0.6648 | 0.7727 | | 0.6939 | 5.0 | 10 | 0.5187 | 0.7727 | | 0.6939 | 6.0 | 12 | 0.4903 | 0.8182 | | 0.6939 | 7.0 | 14 | 0.5087 | 0.7955 | | 0.6939 | 8.0 | 16 | 0.5789 | 0.7727 | | 0.6939 | 9.0 | 18 | 0.4919 | 0.8409 | | 0.4553 | 10.0 | 20 | 0.4707 | 0.75 | | 0.4553 | 11.0 | 22 | 0.5120 | 0.8182 | | 0.4553 | 12.0 | 24 | 0.4734 | 0.75 | | 0.4553 | 13.0 | 26 | 0.4255 | 0.7727 | | 0.4553 | 14.0 | 28 | 0.3695 | 0.8636 | | 0.3658 | 15.0 | 30 | 0.3848 | 0.8182 | | 0.3658 | 16.0 | 32 | 0.3586 | 0.8409 | | 0.3658 | 17.0 | 34 | 0.4962 | 0.8409 | | 0.3658 | 18.0 | 36 | 0.3645 | 0.8636 | | 0.3658 | 19.0 | 38 | 0.3455 | 0.8864 | | 0.2667 | 20.0 | 40 | 0.3477 | 0.9091 | | 0.2667 | 21.0 | 42 | 0.3275 | 0.8864 | | 0.2667 | 22.0 | 44 | 0.3400 | 0.8864 | | 0.2667 | 23.0 | 46 | 0.3780 | 0.8864 | | 0.2667 | 24.0 | 48 | 0.4243 | 0.8409 | | 0.1794 | 25.0 | 50 | 0.4429 | 0.8409 | | 0.1794 | 26.0 | 52 | 0.5026 | 0.8409 | | 0.1794 | 27.0 | 54 | 0.4811 | 0.8409 | | 0.1794 | 28.0 | 56 | 0.4733 | 0.8182 | | 0.1794 | 29.0 | 58 | 0.4384 | 0.8636 | | 0.1861 | 30.0 | 60 | 0.4354 | 0.9091 | | 0.1861 | 31.0 | 62 | 0.4511 | 0.8864 | | 0.1861 | 32.0 | 64 | 0.3315 | 0.8636 | | 0.1861 | 33.0 | 66 | 0.3100 | 0.8864 | | 0.1861 | 34.0 | 68 | 0.3594 | 0.9091 | | 0.1521 | 35.0 | 70 | 0.4052 | 0.9091 | | 0.1521 | 36.0 | 72 | 0.3878 | 0.8864 | | 0.1521 | 37.0 | 74 | 0.3905 | 0.9091 | | 0.1521 | 38.0 | 76 | 0.4173 | 0.9091 | | 0.1521 | 39.0 | 78 | 0.4774 | 0.9091 | | 0.1333 | 40.0 | 80 | 0.5656 | 0.8864 | | 0.1333 | 41.0 | 82 | 0.5146 | 0.9091 | | 0.1333 | 42.0 | 84 | 0.4158 | 0.8636 | | 0.1333 | 43.0 | 86 | 0.4067 | 0.8636 | | 0.1333 | 44.0 | 88 | 0.4412 | 0.9091 | | 0.1297 | 45.0 | 90 | 0.4733 | 0.9091 | | 0.1297 | 46.0 | 92 | 0.4243 | 0.9091 | | 0.1297 | 47.0 | 94 | 0.4279 | 0.9091 | | 0.1297 | 48.0 
| 96 | 0.4020 | 0.9091 | | 0.1297 | 49.0 | 98 | 0.3842 | 0.8636 | | 0.1038 | 50.0 | 100 | 0.3811 | 0.8409 | | 0.1038 | 51.0 | 102 | 0.3947 | 0.8636 | | 0.1038 | 52.0 | 104 | 0.4587 | 0.9091 | | 0.1038 | 53.0 | 106 | 0.4300 | 0.9091 | | 0.1038 | 54.0 | 108 | 0.3804 | 0.8636 | | 0.1101 | 55.0 | 110 | 0.4216 | 0.8636 | | 0.1101 | 56.0 | 112 | 0.3966 | 0.8636 | | 0.1101 | 57.0 | 114 | 0.4216 | 0.9091 | | 0.1101 | 58.0 | 116 | 0.4569 | 0.9091 | | 0.1101 | 59.0 | 118 | 0.4392 | 0.9091 | | 0.1085 | 60.0 | 120 | 0.4479 | 0.9091 | | 0.1085 | 61.0 | 122 | 0.4657 | 0.9091 | | 0.1085 | 62.0 | 124 | 0.5242 | 0.9091 | | 0.1085 | 63.0 | 126 | 0.5626 | 0.9091 | | 0.1085 | 64.0 | 128 | 0.5570 | 0.9091 | | 0.105 | 65.0 | 130 | 0.5035 | 0.9091 | | 0.105 | 66.0 | 132 | 0.4490 | 0.9091 | | 0.105 | 67.0 | 134 | 0.4366 | 0.9091 | | 0.105 | 68.0 | 136 | 0.4416 | 0.8636 | | 0.105 | 69.0 | 138 | 0.4597 | 0.9091 | | 0.0918 | 70.0 | 140 | 0.4795 | 0.8636 | | 0.0918 | 71.0 | 142 | 0.4922 | 0.8636 | | 0.0918 | 72.0 | 144 | 0.5078 | 0.8409 | | 0.0918 | 73.0 | 146 | 0.5089 | 0.8636 | | 0.0918 | 74.0 | 148 | 0.5109 | 0.8636 | | 0.1072 | 75.0 | 150 | 0.5125 | 0.8864 | | 0.1072 | 76.0 | 152 | 0.5267 | 0.8864 | | 0.1072 | 77.0 | 154 | 0.5346 | 0.9091 | | 0.1072 | 78.0 | 156 | 0.5291 | 0.8864 | | 0.1072 | 79.0 | 158 | 0.5188 | 0.8636 | | 0.0895 | 80.0 | 160 | 0.5222 | 0.8636 | | 0.0895 | 81.0 | 162 | 0.5319 | 0.8636 | | 0.0895 | 82.0 | 164 | 0.5475 | 0.8864 | | 0.0895 | 83.0 | 166 | 0.5576 | 0.9091 | | 0.0895 | 84.0 | 168 | 0.5441 | 0.9091 | | 0.0836 | 85.0 | 170 | 0.5266 | 0.8864 | | 0.0836 | 86.0 | 172 | 0.5047 | 0.8864 | | 0.0836 | 87.0 | 174 | 0.4888 | 0.8864 | | 0.0836 | 88.0 | 176 | 0.4824 | 0.8864 | | 0.0836 | 89.0 | 178 | 0.4814 | 0.8864 | | 0.0996 | 90.0 | 180 | 0.4823 | 0.9091 | | 0.0996 | 91.0 | 182 | 0.4826 | 0.9091 | | 0.0996 | 92.0 | 184 | 0.4841 | 0.8864 | | 0.0996 | 93.0 | 186 | 0.4880 | 0.9091 | | 0.0996 | 94.0 | 188 | 0.4879 | 0.9091 | | 0.086 | 95.0 | 190 | 0.4829 | 0.9091 | | 0.086 | 96.0 | 192 | 0.4798 | 0.8864 | | 0.086 | 97.0 | 194 | 0.4811 | 0.8864 | | 0.086 | 98.0 | 196 | 0.4819 | 0.8864 | | 0.086 | 99.0 | 198 | 0.4816 | 0.8864 | | 0.0745 | 100.0 | 200 | 0.4816 | 0.8864 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
matthieuzone/MAROILLESter
matthieuzone
2024-05-22T12:24:15Z
1
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-22T06:11:08Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks cheese widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - matthieuzone/MAROILLESter <Gallery /> ## Model description These are matthieuzone/MAROILLESter LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of sks cheese` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/matthieuzone/MAROILLESter/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
NassimB/mistral-7b-platypus-lamini-vxxiii-chat-real_augmented_costumer
NassimB
2024-05-22T12:23:31Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-05-22T08:50:47Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-v0.1 model-index: - name: mistral-7b-platypus-lamini-vxxiii-chat-real_augmented_costumer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-7b-platypus-lamini-vxxiii-chat-real_augmented_costumer This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.1 - Pytorch 2.2.0+cu121 - Datasets 2.14.6 - Tokenizers 0.15.1
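The card above gives training hyperparameters but no usage snippet. A minimal sketch for loading this LoRA adapter on top of the Mistral-7B base it names, assuming the standard peft flow (the example prompt is a guess; the card does not document a prompt format):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"  # base model named in the card
adapter_id = "NassimB/mistral-7b-platypus-lamini-vxxiii-chat-real_augmented_costumer"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)

# Attach the fine-tuned LoRA adapter from this repository.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

prompt = "How do I reset my account password?"  # illustrative customer-support query
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```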
GeorgeBredis/Phi-3-mini-128k-instruct-Q4_K_M-GGUF
GeorgeBredis
2024-05-22T12:22:33Z
4
0
null
[ "gguf", "nlp", "code", "llama-cpp", "gguf-my-repo", "text-generation", "en", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-22T12:22:21Z
---
language:
- en
license: mit
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
widget:
- messages:
  - role: user
    content: Can you provide ways to eat combinations of bananas and dragonfruits?
---

# GeorgeBredis/Phi-3-mini-128k-instruct-Q4_K_M-GGUF

This model was converted to GGUF format from [`microsoft/Phi-3-mini-128k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew:

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo GeorgeBredis/Phi-3-mini-128k-instruct-Q4_K_M-GGUF --model phi-3-mini-128k-instruct.Q4_K_M.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo GeorgeBredis/Phi-3-mini-128k-instruct-Q4_K_M-GGUF --model phi-3-mini-128k-instruct.Q4_K_M.gguf -c 2048
```

Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo:

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m phi-3-mini-128k-instruct.Q4_K_M.gguf -n 128
```
westlake-repl/ProTrek_35M_UniRef50
westlake-repl
2024-05-22T12:22:23Z
0
0
null
[ "arxiv:2103.00020", "license:mit", "region:us" ]
null
2024-05-22T02:45:15Z
---
license: mit
---

**Github repo: https://github.com/westlake-repl/ProTrek**

## Overview

ProTrek is a multimodal model that integrates protein sequence, protein structure, and text information for better protein understanding. It adopts contrastive learning to learn the representations of protein sequence and structure. During the pre-training phase, we calculate the InfoNCE loss for each pair of modalities, as [CLIP](https://arxiv.org/abs/2103.00020) does.

## Model architecture

**Protein sequence encoder**: [esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D)

**Protein structure encoder**: foldseek_t12_35M (identical architecture to ESM-2, except that the vocabulary contains only 3Di tokens)

**Text encoder**: [BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext)

## Obtain embeddings and calculate similarity scores (please clone our repo first)

```python
import torch

from model.ProtTrek.protrek_trimodal_model import ProTrekTrimodalModel
from utils.foldseek_util import get_struc_seq

# Load model
config = {
    "protein_config": "weights/ProTrek_35M_UniRef50/esm2_t12_35M_UR50D",
    "text_config": "weights/ProTrek_35M_UniRef50/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext",
    "structure_config": "weights/ProTrek_35M_UniRef50/foldseek_t12_35M",
    "load_protein_pretrained": False,
    "load_text_pretrained": False,
    "from_checkpoint": "weights/ProTrek_35M_UniRef50/ProTrek_35M_UniRef50.pt"
}

device = "cuda"
model = ProTrekTrimodalModel(**config).eval().to(device)

# Load protein and text
pdb_path = "example/8ac8.cif"
seqs = get_struc_seq("bin/foldseek", pdb_path, ["A"])["A"]
aa_seq = seqs[0]
foldseek_seq = seqs[1].lower()
text = "Replication initiator in the monomeric form, and autogenous repressor in the dimeric form."

with torch.no_grad():
    # Obtain protein sequence embedding
    seq_embedding = model.get_protein_repr([aa_seq])
    print("Protein sequence embedding shape:", seq_embedding.shape)

    # Obtain protein structure embedding
    struc_embedding = model.get_structure_repr([foldseek_seq])
    print("Protein structure embedding shape:", struc_embedding.shape)

    # Obtain text embedding
    text_embedding = model.get_text_repr([text])
    print("Text embedding shape:", text_embedding.shape)

    # Calculate similarity score between protein sequence and structure
    seq_struc_score = seq_embedding @ struc_embedding.T / model.temperature
    print("Similarity score between protein sequence and structure:", seq_struc_score.item())

    # Calculate similarity score between protein sequence and text
    seq_text_score = seq_embedding @ text_embedding.T / model.temperature
    print("Similarity score between protein sequence and text:", seq_text_score.item())

    # Calculate similarity score between protein structure and text
    struc_text_score = struc_embedding @ text_embedding.T / model.temperature
    print("Similarity score between protein structure and text:", struc_text_score.item())

"""
Protein sequence embedding shape: torch.Size([1, 1024])
Protein structure embedding shape: torch.Size([1, 1024])
Text embedding shape: torch.Size([1, 1024])
Similarity score between protein sequence and structure: 38.83826446533203
Similarity score between protein sequence and text: 17.90523338317871
Similarity score between protein structure and text: 18.044755935668945
"""
```
matthieuzone/REBLOCHONter
matthieuzone
2024-05-22T12:19:37Z
1
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-22T06:15:21Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks cheese widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - matthieuzone/REBLOCHONter <Gallery /> ## Model description These are matthieuzone/REBLOCHONter LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of sks cheese to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/matthieuzone/REBLOCHONter/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
giantdev/dippy-cHGMu-sn11m3
giantdev
2024-05-22T12:15:48Z
126
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T12:13:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
george6/NER
george6
2024-05-22T12:14:29Z
163
0
transformers
[ "transformers", "safetensors", "roberta", "token-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-22T08:36:09Z
--- license: apache-2.0 ---
matthieuzone/MUNSTERter
matthieuzone
2024-05-22T12:14:23Z
1
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-22T06:13:12Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks cheese widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - matthieuzone/MUNSTERter <Gallery /> ## Model description These are matthieuzone/MUNSTERter LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of sks cheese to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/matthieuzone/MUNSTERter/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
giantdev/dippy-oziWu-sn11m9
giantdev
2024-05-22T12:12:55Z
126
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T12:11:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ramikan-BR/tinyllama-coder-py-4bit_LORA-v4
Ramikan-BR
2024-05-22T12:12:41Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/tinyllama-chat-bnb-4bit", "base_model:finetune:unsloth/tinyllama-chat-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-22T12:12:20Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/tinyllama-chat-bnb-4bit --- # Uploaded model - **Developed by:** Ramikan-BR - **License:** apache-2.0 - **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
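As a companion to the Unsloth training note above, a sketch of loading this fine-tune for inference with Unsloth's FastLanguageModel API (the max_seq_length and prompt are illustrative; the card does not state them):

```python
from unsloth import FastLanguageModel

# Load the fine-tuned weights in 4-bit, matching the 4-bit base named in the card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Ramikan-BR/tinyllama-coder-py-4bit_LORA-v4",
    max_seq_length=2048,  # illustrative; use the training length if known
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The same pattern applies to the other Unsloth fine-tunes in this dump, such as falan42/gemma_2b_soda_mark12.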
soumagok/flan-t5-base-xsum
soumagok
2024-05-22T12:11:33Z
113
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-22T12:10:57Z
--- license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer model-index: - name: flan-t5-base-xsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/jadavpur/huggingface/runs/d9t15atc) # flan-t5-base-xsum This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9268 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1058 | 1.0 | 125 | 1.9193 | | 2.1462 | 2.0 | 250 | 1.9229 | | 1.7736 | 3.0 | 375 | 1.9268 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.1.2 - Datasets 2.19.1 - Tokenizers 0.19.1
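The card above leaves usage unspecified; given the model name, a summarization pipeline is the natural fit. A minimal sketch, treating the XSum-style summarization task as an assumption, since the card itself only says "an unknown dataset":

```python
from transformers import pipeline

# The repo name suggests XSum-style single-sentence summarization; this is an
# assumption, as the card does not state the task or dataset.
summarizer = pipeline("summarization", model="soumagok/flan-t5-base-xsum")

article = (
    "The local council has approved plans for a new cycle path along the river. "
    "Construction is expected to begin next spring and take about a year, with "
    "funding split between the council and a regional transport grant."
)
print(summarizer(article, max_length=40, min_length=5)[0]["summary_text"])
```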
AndrewDOrlov/bert_for_prof_roles_128_all_labels
AndrewDOrlov
2024-05-22T12:10:33Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-22T09:35:36Z
--- tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: bert_for_prof_roles_128_all_labels results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_for_prof_roles_128_all_labels This model was trained from scratch on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.0059 - F1: 0.8558 - Roc Auc: 0.9126 - Accuracy: 0.8233 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:| | 0.0119 | 1.0 | 7016 | 0.0108 | 0.7427 | 0.8192 | 0.6327 | | 0.0076 | 2.0 | 14032 | 0.0077 | 0.8118 | 0.8790 | 0.7549 | | 0.006 | 3.0 | 21048 | 0.0067 | 0.8352 | 0.8958 | 0.7874 | | 0.005 | 4.0 | 28064 | 0.0062 | 0.8459 | 0.9058 | 0.8074 | | 0.0046 | 5.0 | 35080 | 0.0060 | 0.8525 | 0.9112 | 0.8190 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
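The F1 and ROC-AUC metrics in the card above suggest multi-label classification (one text can carry several professional roles). A sketch of inference under that assumption, with an illustrative 0.5 sigmoid threshold:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "AndrewDOrlov/bert_for_prof_roles_128_all_labels"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)
model.eval()

text = "Led a team of data engineers and built ETL pipelines."  # illustrative input
inputs = tokenizer(text, truncation=True, max_length=128, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label decoding: an independent sigmoid per label, thresholded at 0.5.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```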
safehavens/safehavens_chatbot
safehavens
2024-05-22T12:10:18Z
7
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "medical", "therapy", "en", "dataset:ap00rvmohit/Adolescent_Therapy_Dataset", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T11:30:17Z
--- license: mit datasets: - ap00rvmohit/Adolescent_Therapy_Dataset language: - en tags: - medical - therapy --- # README ## Overview Safehavens Chatbot is a fine-tuned version of the Llama 2 open-source language model, specifically designed to assist in therapeutic and mental health support scenarios. This model leverages advanced natural language processing capabilities to provide empathetic, insightful, and supportive responses, making it a useful tool for therapists, counselors, and individuals seeking mental health support. ## Features - **Empathetic Response Generation**: Safehavens Chatbot generates responses that are empathetic and understanding, helping users feel heard and supported. - **Therapeutic Techniques**: Incorporates various therapeutic techniques such as cognitive-behavioral therapy (CBT), mindfulness, and motivational interviewing. - **Customizable Interactions**: Allows customization to better align with specific therapeutic approaches and individual client needs. - **Scalable Support**: Provides a scalable solution to support mental health professionals by offering preliminary support and engagement with clients. ## Training Dataset Safehavens Chatbot is trained on a curated Adolescent Therapy dataset. ## Ethical Considerations ### Confidentiality and Privacy - **Data Anonymization**: All training data is synthesized and therefore no therapist-client data is involved. - **Usage Guidelines**: Users are encouraged to use Safehavens Chatbot as a supplementary tool, not as a replacement for professional therapy. ### Bias and Fairness - **Bias Mitigation**: Efforts have been made to minimize biases in the training data, but users should remain aware of potential biases in AI-generated responses. - **Inclusivity**: The model is designed to be inclusive and supportive of diverse backgrounds and identities. ### Risk Factors - **Not a Substitute for Professional Help**: Safehavens Chatbot is not a licensed therapist and should not replace professional mental health services. It is intended to provide support and should be used in conjunction with professional guidance. - **Risk of Misuse**: There is a risk of misuse in sensitive scenarios. Users should be cautious and ensure that the tool is used ethically and responsibly. - **Monitoring and Feedback**: Continuous monitoring and user feedback are essential to improve the model's performance and address any issues that arise. ## Usage To use Safehavens Chatbot, follow these steps: 1. **Installation**: Ensure you have the necessary software and dependencies installed to run Llama 2-based models. 2. **Loading the Model**: Load the Safehavens Chatbot model using your preferred AI framework (a hedged loading sketch follows this card). 3. **Customization**: Customize the interaction settings based on the specific needs of your therapy sessions or support requirements. 4. **Engage**: Start interacting with Safehavens Chatbot, keeping in mind the ethical guidelines and limitations. ## Contributions We welcome contributions from the community to help improve Safehavens Chatbot. --- By using Safehavens Chatbot, you agree to adhere to the ethical guidelines and acknowledge the limitations and risks associated with AI-generated therapeutic support.
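Step 2 of the card's usage section can be made concrete with a transformers sketch (the prompt and generation settings are illustrative; the card does not document a chat template):

```python
from transformers import pipeline

# Load the chatbot as a plain text-generation pipeline; the repo is tagged as a
# Llama-based text-generation model.
chat = pipeline("text-generation", model="safehavens/safehavens_chatbot")

prompt = "I have been feeling overwhelmed at school lately. What can I do?"
reply = chat(prompt, max_new_tokens=150, do_sample=True, temperature=0.7)
print(reply[0]["generated_text"])
```

As the card stresses, output from this model supplements rather than replaces professional mental health support.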
giantdev/dippy-obnTB-sn11m1
giantdev
2024-05-22T12:10:11Z
125
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T12:08:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ilhemhmz752/FineTuned-Llama-2-AgroBot
ilhemhmz752
2024-05-22T12:09:17Z
0
0
peft
[ "peft", "pytorch", "llama", "region:us" ]
null
2024-05-22T11:23:26Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
lagoma/tutorial
lagoma
2024-05-22T12:08:56Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2024-05-22T12:03:46Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
borakaragul/blip2-opt-2.7b-ffhq-text-descriptor-V2-adapters
borakaragul
2024-05-22T12:07:34Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-22T12:07:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hgnoi/5GKDlvdApgoDBdDD
hgnoi
2024-05-22T12:07:14Z
126
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T12:05:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
matthieuzone/EMMENTALter
matthieuzone
2024-05-22T12:06:47Z
1
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-22T06:09:02Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks cheese widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - matthieuzone/EMMENTALter <Gallery /> ## Model description These are matthieuzone/EMMENTALter LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of sks cheese to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/matthieuzone/EMMENTALter/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
matthieuzone/FOURME_D_AMBERTter
matthieuzone
2024-05-22T12:06:41Z
4
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-22T06:10:00Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks cheese widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - matthieuzone/FOURME_D_AMBERTter <Gallery /> ## Model description These are matthieuzone/FOURME_D_AMBERTter LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of sks cheese to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/matthieuzone/FOURME_D_AMBERTter/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
hgnoi/ZD5VnWn25O6ZVl99
hgnoi
2024-05-22T12:06:29Z
126
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T12:04:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
falan42/gemma_2b_soda_mark12
falan42
2024-05-22T12:03:18Z
78
1
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/gemma-2b-it-bnb-4bit", "base_model:quantized:unsloth/gemma-2b-it-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-22T12:01:46Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl - sft base_model: unsloth/gemma-2b-it-bnb-4bit --- # Uploaded model - **Developed by:** emir12 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2b-it-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
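A minimal sketch of running this checkpoint for inference. The repo ships bitsandbytes 4-bit weights, so a plain `from_pretrained` is assumed to be enough (requires `bitsandbytes` and `accelerate`); the prompt is a hypothetical example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "falan42/gemma_2b_soda_mark12"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Gemma-it ships a chat template, so apply_chat_template builds the prompt.
messages = [{"role": "user", "content": "Write a short haiku about soda."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```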
hgnoi/YStYdvWcYNEIllgJ
hgnoi
2024-05-22T12:02:03Z
126
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T12:00:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
matthieuzone/CHEVREter
matthieuzone
2024-05-22T12:01:51Z
1
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-22T06:08:29Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks cheese widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - matthieuzone/CHEVREter <Gallery /> ## Model description These are matthieuzone/CHEVREter LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of sks cheese to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](matthieuzone/CHEVREter/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline (see the sketch below) ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
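A minimal sketch filling in the TODO above, assuming the standard diffusers LoRA-loading API and the fp16-safe VAE mentioned in the card:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Use the fp16-safe VAE the card says was used during training.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load the DreamBooth LoRA weights from this repository.
pipe.load_lora_weights("matthieuzone/CHEVREter")

# "a photo of sks cheese" is the trigger phrase from the card.
image = pipe("a photo of sks cheese on a slate board", num_inference_steps=30).images[0]
image.save("chevre.png")
```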
matthieuzone/VACHERINter
matthieuzone
2024-05-22T11:57:49Z
2
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-22T06:17:25Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks cheese widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - matthieuzone/VACHERINter <Gallery /> ## Model description These are matthieuzone/VACHERINter LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of sks cheese to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](matthieuzone/VACHERINter/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline (see the sketch below) ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
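As with the sibling cheese LoRAs, a minimal sketch for the TODO above; `fuse_lora()` is an optional diffusers call that bakes the adapter into the base weights:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("matthieuzone/VACHERINter")
pipe.fuse_lora()  # merge the LoRA into the base weights for slightly faster inference

image = pipe("a photo of sks cheese, soft window light", num_inference_steps=30).images[0]
image.save("vacherin.png")
```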
zakyzaidan/poca-SoccerTwos
zakyzaidan
2024-05-22T11:57:22Z
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2024-05-22T11:54:44Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: zakyzaidan/poca-SoccerTwos 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
RichardErkhov/beberik_-_Nyxene-v3-11B-4bits
RichardErkhov
2024-05-22T11:55:05Z
76
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-22T11:46:52Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Nyxene-v3-11B - bnb 4bits - Model creator: https://huggingface.co/beberik/ - Original model: https://huggingface.co/beberik/Nyxene-v3-11B/ Original model description: --- license: cc-by-nc-4.0 tags: - merge model-index: - name: Nyxene-v3-11B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 69.62 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v3-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.33 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v3-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.75 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v3-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 60.91 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v3-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 80.19 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v3-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 63.53 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v3-11B name: Open LLM Leaderboard --- ## Description This repo contains bf16 files of Nyxene-v1-11B. Just new version with some new things. ## Model used - [Intel/neural-chat-7b-v3-3-Slerp](https://huggingface.co/Intel/neural-chat-7b-v3-3-Slerp) - [AIDC-ai-business/Marcoroni-7B-v3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3) - [rwitz/go-bruins-v2](https://huggingface.co/rwitz/go-bruins-v2) - [chargoddard/loyal-piano-m7-cdpo](https://huggingface.co/chargoddard/loyal-piano-m7-cdpo) ## Prompt template Just use chatml. 
## The secret sauce go-bruins-loyal-piano-11B : ``` slices: - sources: - model: rwitz/go-bruins-v2 layer_range: [0, 24] - sources: - model: chargoddard/loyal-piano-m7-cdpo layer_range: [8, 32] merge_method: passthrough dtype: bfloat16 ``` neural-marcoroni-11B : ``` slices: - sources: - model: AIDC-ai-business/Marcoroni-7B-v3 layer_range: [0, 24] - sources: - model: Intel/neural-chat-7b-v3-3-Slerp layer_range: [8, 32] merge_method: passthrough dtype: bfloat16 ``` Nyxene-11B : ``` slices: - sources: - model: "./go-bruins-loyal-piano-11B" layer_range: [0, 48] - model: "./neural-marcoroni-11B" layer_range: [0, 48] merge_method: slerp base_model: "./go-bruins-loyal-piano-11B" parameters: t: - filter: lm_head value: [0.5] - filter: embed_tokens value: [0.75] - filter: self_attn value: [0.75, 0.25] - filter: mlp value: [0.25, 0.75] - filter: layernorm value: [0.5, 0.5] - filter: modelnorm value: [0.5] - value: 0.5 # fallback for rest of tensors dtype: bfloat16 ``` I use [mergekit](https://github.com/cg123/mergekit) for all the manipulation told here. Thanks to the [Undi95](https://huggingface.co/Undi95) for the original [11B mistral merge](https://huggingface.co/Undi95/Mistral-11B-OmniMix) recipe. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beberik__Nyxene-v3-11B) | Metric |Value| |---------------------------------|----:| |Avg. |70.72| |AI2 Reasoning Challenge (25-Shot)|69.62| |HellaSwag (10-Shot) |85.33| |MMLU (5-Shot) |64.75| |TruthfulQA (0-shot) |60.91| |Winogrande (5-shot) |80.19| |GSM8k (5-shot) |63.53|
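The card above says to "just use chatml"; a minimal sketch of running this pre-quantized 4-bit checkpoint with transformers (assumes `bitsandbytes` and `accelerate` are installed, and the ChatML prompt is a hypothetical example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/beberik_-_Nyxene-v3-11B-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
# The weights are already stored in bitsandbytes 4-bit format,
# so no extra quantization config should be needed here.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = "<|im_start|>user\nName three uses of a language model.<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```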
yh1306/abc
yh1306
2024-05-22T11:52:21Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-22T11:49:43Z
--- license: apache-2.0 ---
syahrilz/rilssssss
syahrilz
2024-05-22T11:51:10Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-22T11:51:10Z
--- license: apache-2.0 ---
mesa44/rl_course_vizdoom_health_gathering_supreme
mesa44
2024-05-22T11:48:23Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-22T11:48:17Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 12.42 +/- 6.30 name: mean_reward verified: false --- An **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r mesa44/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details. ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
imagepipeline/spitroast
imagepipeline
2024-05-22T11:46:42Z
0
0
null
[ "imagepipeline", "imagepipeline.io", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-05-22T11:46:40Z
--- license: creativeml-openrail-m tags: - imagepipeline - imagepipeline.io - text-to-image - ultra-realistic pinned: false pipeline_tag: text-to-image --- ## spitroast <img src="https://via.placeholder.com/468x300?text=App+Screenshot+Here" alt="Generated on Image Pipeline" style="border-radius: 10px;"> **This LoRA model is uploaded on [imagepipeline.io](https://imagepipeline.io/)** Model details - spitroast [![Try this model](https://img.shields.io/badge/try_this_model-image_pipeline-BD9319)](https://imagepipeline.io/models/spitroast?id=a097d16e-7407-4ec4-b62b-9e2f6b0e52a0/) ## How to try this model? You can try using it locally or send an API call to test the output quality. Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required. Coding in `php`, `javascript`, `node`, etc.? Check out our documentation [![documentation](https://img.shields.io/badge/documentation-image_pipeline-blue)](https://docs.imagepipeline.io/docs/introduction) ```python import requests import json url = "https://imagepipeline.io/sd/text2image/v1/run" payload = json.dumps({ "model_id": "sd1.5", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": False, "guidance_scale": 7.5, "multi_lingual": "no", "embeddings": "", "lora_models": "a097d16e-7407-4ec4-b62b-9e2f6b0e52a0", "lora_weights": "0.5" }) headers = { 'Content-Type': 'application/json', 'API-Key': 'your_api_key' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) ``` Get more ready-to-use `MODELS` like this for `SD 1.5` and `SDXL`: [![All models](https://img.shields.io/badge/Get%20All%20Models-image_pipeline-BD9319)](https://imagepipeline.io/models) ### API Reference #### Generate Image ```http https://api.imagepipeline.io/sd/text2image/v1 ``` | Headers | Type | Description | |:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------| | `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) | | `Content-Type` | `str` | application/json - content type of the request body | | Parameter | Type | Description | | :-------- | :------- | :------------------------- | | `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own| | `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips | | `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) | | `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. 
Ideal value 7.5-12.5 | | `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found on the models page | | `lora_weights` | `str, array` | Strength of the LoRA effect | ### Feedback If you have any feedback, please reach out to us at [email protected] #### 🔗 Visit Website [![portfolio](https://img.shields.io/badge/image_pipeline-BD9319?style=for-the-badge&logo=gocd&logoColor=white)](https://imagepipeline.io/) If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
hgnoi/jkM4MH2yBc0zf7jD
hgnoi
2024-05-22T11:45:02Z
126
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T11:43:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AlekseyScorpi/saiga_llama3_vacancies_GGUF
AlekseyScorpi
2024-05-22T11:44:10Z
12
1
null
[ "gguf", "code", "text-generation", "ru", "dataset:AlekseyScorpi/vacancies_prompts", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-19T14:35:20Z
--- license: llama3 datasets: - AlekseyScorpi/vacancies_prompts language: - ru pipeline_tag: text-generation tags: - code --- ### About this model * This is a GGUF quantization of the https://huggingface.co/AlekseyScorpi/saiga_llama3_vacancies_merged model * You can find more information here: https://huggingface.co/AlekseyScorpi/saiga_llama3_vacancies_lora * More information about GGUF here: https://huggingface.co/docs/hub/gguf
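A minimal llama-cpp-python sketch for running one of the GGUF files locally (the filename below is hypothetical; pick the actual file from the repo's Files tab):

```python
from llama_cpp import Llama

# Hypothetical filename -- check the repository for the real quantization you want.
llm = Llama(model_path="saiga_llama3_vacancies.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Составь короткое описание вакансии Python-разработчика."}]
)
print(out["choices"][0]["message"]["content"])
```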
hgnoi/ip0N4GBp2aiNBsAT
hgnoi
2024-05-22T11:44:07Z
127
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T11:42:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
thesven/Llama-3-Refueled-GGUF
thesven
2024-05-22T11:42:55Z
187
0
transformers
[ "transformers", "gguf", "data labeling", "en", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-18T05:05:36Z
--- license: cc-by-nc-4.0 language: - en library_name: transformers tags: - data labeling --- <div style="width: auto; margin-left: auto; margin-right: auto; background-color:black"> <img src="https://assets-global.website-files.com/6423879a8f63c1bb18d74bfa/648818d56d04c3bdf36d71ab_Refuel_rev8-01_ts-p-1600.png" alt="Refuel.ai" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> ## Quantization Description This repo contains GGUF quantized versions of the Refuel AI Llama-3-Refueled model. The model is supplied in different quantizations so that you can see what works best on the hardware you would like to run it on. The repo contains quantizations of the following types: - Q4_0 - Q4_1 - Q4_K - Q4_K_S - Q4_K_M - Q5_0 - Q5_1 - Q5_K - Q5_K_M - Q5_K_S - Q6_K - Q8_0 - Q2_K - Q3_K - Q3_K_S - Q3_K_XS <div style="text-align: center;"> <a href="https://github.com/thesven/GGUF-n-Go"> <img src="https://github.com/thesven/GGUF-n-Go/blob/main/assets/quantized_with.png?raw=true" alt="image/png" style="max-width: 350px;"> </a> </div> ## Model Details RefuelLLM-2-small, aka Llama-3-Refueled, is a Llama3-8B base model instruction-tuned on a corpus of 2750+ datasets, spanning tasks such as classification, reading comprehension, structured attribute extraction and entity resolution. We're excited to open-source the model for the community to build on top of. * More details about the [RefuelLLM-2 family of models](https://www.refuel.ai/blog-posts/announcing-refuel-llm-2) * You can also try out the models in our [LLM playground](https://labs.refuel.ai/playground) **Model developers** - Refuel AI **Input** - Text only. **Output** - Text only. **Architecture** - Llama-3-Refueled is built on top of Llama-3-8B-Instruct, which is an auto-regressive language model that uses an optimized transformer architecture. **Release Date** - May 8, 2024. **License** - [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/deed.en) ## Training Data The model was trained on over 4 billion tokens, spanning 2750+ NLP tasks. Our training collection consists mainly of: 1. Human-annotated datasets like Flan, Task Source, and the Aya collection 2. Synthetic datasets like OpenOrca, OpenHermes and WizardLM 3. Proprietary datasets developed or licensed by Refuel AI ## Benchmarks In this section, we report the results for Refuel models on our benchmark of labeling tasks. For details on the methodology see [here](https://refuel.ai/blog-posts/announcing-refuel-llm-2). 
<table> <tr><th rowspan="2">Provider</th><th rowspan="2">Model</th><th colspan="5" style="text-align: center">LLM Output Quality (by task type)</th></tr> <tr><td>Overall</td><td>Classification</td><td>Reading Comprehension</td><td>Structure Extraction</td><td>Entity Matching</td></tr> <tr><td>Refuel</td><td>RefuelLLM-2</td><td>83.82%</td><td>84.94%</td><td>76.03%</td><td>88.16%</td><td>92.00%</td></tr> <tr><td>OpenAI</td><td>GPT-4-Turbo</td><td>80.88%</td><td>81.77%</td><td>72.08%</td><td>84.79%</td><td>97.20%</td></tr> <tr><td>Refuel</td><td>RefuelLLM-2-small (Llama-3-Refueled)</td><td>79.67%</td><td>81.72%</td><td>70.04%</td><td>84.28%</td><td>92.00%</td></tr> <tr><td>Anthropic</td><td>Claude-3-Opus</td><td>79.19%</td><td>82.49%</td><td>67.30%</td><td>88.25%</td><td>94.96%</td></tr> <tr><td>Meta</td><td>Llama3-70B-Instruct</td><td>78.20%</td><td>79.38%</td><td>66.03%</td><td>85.96%</td><td>94.13%</td></tr> <tr><td>Google</td><td>Gemini-1.5-Pro</td><td>74.59%</td><td>73.52%</td><td>60.67%</td><td>84.27%</td><td>98.48%</td></tr> <tr><td>Mistral</td><td>Mixtral-8x7B-Instruct</td><td>62.87%</td><td>79.11%</td><td>45.56%</td><td>47.08%</td><td>86.52%</td></tr> <tr><td>Anthropic</td><td>Claude-3-Sonnet</td><td>70.99%</td><td>79.91%</td><td>45.44%</td><td>78.10%</td><td>96.34%</td></tr> <tr><td>Anthropic</td><td>Claude-3-Haiku</td><td>69.23%</td><td>77.27%</td><td>50.19%</td><td>84.97%</td><td>54.08%</td></tr> <tr><td>OpenAI</td><td>GPT-3.5-Turbo</td><td>68.13%</td><td>74.39%</td><td>53.21%</td><td>69.40%</td><td>80.41%</td></tr> <tr><td>Meta</td><td>Llama3-8B-Instruct</td><td>62.30%</td><td>68.52%</td><td>49.16%</td><td>65.09%</td><td>63.61%</td></tr> </table> ## Limitations Llama-3-Refueled does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model reliably respect guardrails, allowing for deployment in environments requiring moderated outputs.
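Since the repo ships many quantization types, a sketch of fetching one specific file with `huggingface_hub` and loading it with llama-cpp-python may help; the filename is hypothetical and should be matched to one of the types listed above:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical filename -- pick the quantization you want from the Files tab.
path = hf_hub_download(
    repo_id="thesven/Llama-3-Refueled-GGUF",
    filename="Llama-3-Refueled.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=8192)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Classify the sentiment of: 'Great product, fast shipping.'"}]
)
print(out["choices"][0]["message"]["content"])
```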
hgnoi/oHBT32qNeFq5kNu5
hgnoi
2024-05-22T11:42:47Z
126
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T11:41:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AlekseyScorpi/saiga_llama3_vacancies_merged
AlekseyScorpi
2024-05-22T11:42:05Z
8
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "conversational", "ru", "dataset:AlekseyScorpi/vacancies_prompts", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-19T14:09:51Z
--- license: llama3 datasets: - AlekseyScorpi/vacancies_prompts language: - ru pipeline_tag: text-generation tags: - code --- ### About this model This model is the result of merging the LoRA adapter into its original base model. You can find more information here: https://huggingface.co/AlekseyScorpi/saiga_llama3_vacancies_lora.
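A minimal transformers sketch for running the merged checkpoint (the Russian prompt is a hypothetical example matching the model's vacancy domain):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "AlekseyScorpi/saiga_llama3_vacancies_merged"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", torch_dtype="auto")

# Hypothetical prompt: "Write a job posting for a junior Python developer."
messages = [{"role": "user", "content": "Напиши текст вакансии для младшего Python-разработчика."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=256)[0], skip_special_tokens=True))
```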
hao1306/greedy_newloss
hao1306
2024-05-22T11:40:33Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-22T11:38:08Z
--- license: apache-2.0 ---
dmitrii-a-lex/Mistral-7B-v0.2-csn-SFT
dmitrii-a-lex
2024-05-22T11:40:20Z
12
0
transformers
[ "transformers", "safetensors", "gguf", "mistral", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T10:46:21Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BUAADreamer/Chinese-LLaVA-Med-7B
BUAADreamer
2024-05-22T11:39:13Z
86
3
transformers
[ "transformers", "safetensors", "llava", "image-text-to-text", "llama-factory", "visual-question-answering", "zh", "dataset:BUAADreamer/llava-med-zh-instruct-60k", "dataset:BUAADreamer/llava-med-zh-eval", "license:apache-2.0", "endpoints_compatible", "region:us" ]
visual-question-answering
2024-05-09T06:08:54Z
--- language: - zh license: apache-2.0 library_name: transformers tags: - llama-factory datasets: - BUAADreamer/llava-med-zh-instruct-60k - BUAADreamer/llava-med-zh-eval metrics: - accuracy pipeline_tag: visual-question-answering --- # Chinese-LLaVA-Med <!-- Provide a quick summary of what the model is/does. --> A Chinese medical multimodal large language model based on LLaVA-1.5. Project URL: https://github.com/BUAADreamer/Chinese-LLaVA-Med
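A minimal sketch assuming the checkpoint loads through transformers' LLaVA classes (the prompt format and image path are hypothetical; see the project repo for the canonical usage):

```python
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

repo = "BUAADreamer/Chinese-LLaVA-Med-7B"
processor = AutoProcessor.from_pretrained(repo)
model = LlavaForConditionalGeneration.from_pretrained(repo, device_map="auto")

image = Image.open("chest_xray.png")  # hypothetical local medical image
# Hypothetical LLaVA-1.5-style prompt: "Describe the main findings in this medical image."
prompt = "USER: <image>\n请描述这张医学影像中的主要发现。 ASSISTANT:"
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(out[0], skip_special_tokens=True))
```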
matthieuzone/CAMEMBERTter
matthieuzone
2024-05-22T11:29:53Z
2
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-22T06:07:33Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks cheese widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - matthieuzone/CAMEMBERTter <Gallery /> ## Model description These are matthieuzone/CAMEMBERTter LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of sks cheese to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](matthieuzone/CAMEMBERTter/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline (see the sketch below) ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
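One more sketch for the TODO above, this time showing how the LoRA strength can be dialed in via `cross_attention_kwargs`:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("matthieuzone/CAMEMBERTter")

image = pipe(
    "a photo of sks cheese on a rustic wooden table",
    num_inference_steps=30,
    cross_attention_kwargs={"scale": 0.8},  # 0-1, how strongly the LoRA is applied
).images[0]
image.save("camembert.png")
```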
IzzNot/Testing
IzzNot
2024-05-22T11:29:38Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2024-05-22T11:28:40Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Jingmei/PMC_LLAMA_7B_peft_trainer_Wiki_LambdaA100
Jingmei
2024-05-22T11:28:29Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:chaoyi-wu/PMC_LLAMA_7B", "base_model:adapter:chaoyi-wu/PMC_LLAMA_7B", "license:apache-2.0", "region:us" ]
null
2024-05-22T06:31:29Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: chaoyi-wu/PMC_LLAMA_7B model-index: - name: PMC_LLAMA_7B_peft_trainer_Wiki_LambdaA100 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/noc-lab/PMC_LLAMA_7B_peft_trainer_Wiki_LambdaH100/runs/uoeu0krc) # PMC_LLAMA_7B_peft_trainer_Wiki_LambdaA100 This model is a fine-tuned version of [chaoyi-wu/PMC_LLAMA_7B](https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 123 - gradient_accumulation_steps: 8 - total_train_batch_size: 384 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1.0 ### Training results ### Framework versions - PEFT 0.11.1 - Transformers 4.42.0.dev0 - Pytorch 2.3.0 - Datasets 2.19.1 - Tokenizers 0.19.1
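The card above lists training hyperparameters but no loading example. A minimal usage sketch, assuming the adapter weights in this repository attach cleanly to the listed base model (the prompt text is purely illustrative):

```python
# Hypothetical usage sketch (not from the original card): load the base
# PMC_LLaMA model, then attach this PEFT adapter on top of it.
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base = LlamaForCausalLM.from_pretrained(
    "chaoyi-wu/PMC_LLAMA_7B", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = LlamaTokenizer.from_pretrained("chaoyi-wu/PMC_LLAMA_7B")

# Attach the adapter weights from this repository.
model = PeftModel.from_pretrained(
    base, "Jingmei/PMC_LLAMA_7B_peft_trainer_Wiki_LambdaA100"
)

inputs = tokenizer("The pathophysiology of sepsis involves", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```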
Raneechu/textbookbig2
Raneechu
2024-05-22T11:28:17Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2024-05-22T11:28:13Z
--- license: llama2 library_name: peft tags: - generated_from_trainer base_model: meta-llama/Llama-2-7b-hf model-index: - name: textbookbig2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # textbookbig2 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.9537 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.1922 | 0.0117 | 1 | 3.9537 | ### Framework versions - PEFT 0.6.2 - Transformers 4.40.1 - Pytorch 2.1.1+cu121 - Datasets 2.14.5 - Tokenizers 0.19.1
ar08/tinyllama-1b-alpaca-gguf
ar08
2024-05-22T11:27:03Z
4
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "ar-model", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-20T14:11:19Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - ar-model - llama - gguf --- # Uploaded model - **Developed by:** ar08 - **License:** apache-2.0
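The card doesn't show how to run the GGUF weights. A minimal sketch with `llama-cpp-python`, where the `filename` pattern is an assumption — substitute the actual `.gguf` file listed in this repo's Files tab:

```python
# Hypothetical sketch (not from the card): run the GGUF weights locally
# with llama-cpp-python. The filename below is a glob placeholder, not a
# confirmed file name in this repository.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ar08/tinyllama-1b-alpaca-gguf",
    filename="*.gguf",  # placeholder pattern; pick the real file name
    n_ctx=2048,
)

out = llm(
    "### Instruction:\nSay hello.\n\n### Response:\n",
    max_tokens=64,
    stop=["### Instruction:"],
)
print(out["choices"][0]["text"])
```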
matthieuzone/CHEDDARter
matthieuzone
2024-05-22T11:26:34Z
2
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-22T06:08:11Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks cheese widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - matthieuzone/CHEDDARter <Gallery /> ## Model description These are matthieuzone/CHEDDARter LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of sks cheese` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](matthieuzone/CHEDDARter/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
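The "How to use" snippet above is still a TODO in the card. A minimal sketch, assuming these weights load like other SDXL DreamBooth LoRAs via `load_lora_weights`; the prompt simply reuses the trigger phrase from the card:

```python
# Hypothetical usage sketch (the card's own snippet is a TODO): load the
# SDXL base pipeline, attach this LoRA, and prompt with the trigger phrase.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("matthieuzone/CHEDDARter")

image = pipe("a photo of sks cheese on a wooden board", num_inference_steps=25).images[0]
image.save("cheddar.png")
```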
Raneechu/textbookbig
Raneechu
2024-05-22T11:26:20Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2024-05-22T11:26:17Z
--- license: llama2 library_name: peft tags: - generated_from_trainer base_model: meta-llama/Llama-2-7b-hf model-index: - name: textbookbig results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # textbookbig This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.9561 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.1922 | 0.0117 | 1 | 3.9561 | ### Framework versions - PEFT 0.6.2 - Transformers 4.40.1 - Pytorch 2.1.1+cu121 - Datasets 2.14.5 - Tokenizers 0.19.1
RichardErkhov/Undi95_-_Mistral-11B-v0.1-8bits
RichardErkhov
2024-05-22T11:26:07Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-22T10:55:58Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Mistral-11B-v0.1 - bnb 8bits - Model creator: https://huggingface.co/Undi95/ - Original model: https://huggingface.co/Undi95/Mistral-11B-v0.1/ Original model description: --- license: apache-2.0 tags: - mistral - pretrained --- This is Mistral, but in 11B. I took the layers of the original Mistral-7B and duplicated some of them; this is the first Frankenstein method I found "acceptable" for expanding Mistral. It seems that the first 8 layers of the model are very important: duplicating those layers appears to confuse the model. UPDATE: Forced mergekit to output a bfloat16 file. It should be the same thing, but since the base model is bfloat16, I wanted it to stay bf16 like the OG model. Even though the config file said bfloat16 earlier, the weights were actually float16. <!-- description start --> ## Description This repo contains fp16 files of Mistral-11B-v0.1. <!-- description end --> <!-- description start --> ## Model used - [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1/) <!-- description end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` ## The secret sauce ``` slices: - sources: - model: mistralai/Mistral-7B-v0.1 layer_range: [0, 24] - sources: - model: mistralai/Mistral-7B-v0.1 layer_range: [8, 32] merge_method: passthrough dtype: bfloat16 ``` Special thanks to Sushi. If you want to support me, you can [here](https://ko-fi.com/undiai).
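Since this repo ships bitsandbytes 8-bit weights with the quantization config baked into the checkpoint, loading it should not need extra flags. A minimal sketch, assuming a CUDA GPU with `bitsandbytes` installed; the instruction text is illustrative:

```python
# Hypothetical sketch (not from the card): load the pre-quantized 8-bit
# checkpoint and generate with the Alpaca prompt template shown above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/Undi95_-_Mistral-11B-v0.1-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nName three cheeses.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```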
IR-Cocktail/bert-base-uncased-last-v3-msmarco
IR-Cocktail
2024-05-22T11:11:18Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-22T07:55:26Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # IR-Cocktail/bert-base-uncased-last-v3-msmarco This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('IR-Cocktail/bert-base-uncased-last-v3-msmarco') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=IR-Cocktail/bert-base-uncased-last-v3-msmarco) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 6653 with parameters: ``` {'batch_size': 75, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 10000, "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "correct_bias": false, "eps": 1e-06, "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
donutglazed/dsp-lora-inpainting
donutglazed
2024-05-22T11:06:25Z
2
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "image-to-image", "en", "license:mit", "region:us" ]
image-to-image
2024-05-22T06:50:13Z
--- license: mit language: - en tags: - safetensors - stable-diffusion - diffusers - image-to-image --- # DSP LoRA Inpainting This model uses 512-base-ema.ckpt as a base and is fine-tuned to recognize the interior of a room called "dsp room"; "v2-1_768-ema-pruned.ckpt" is then subtracted from it, and the result is blended with "512-inpainting-ema.ckpt" at a multiplier of 1.
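The recipe above reads like the classic "add difference" checkpoint merge. A rough PyTorch sketch of that operation — the local file names and the exact operand order are assumptions based on the card's wording, not a confirmed procedure:

```python
# Rough "add difference" merge sketch (operand order is an assumption from
# the card's description): start from the inpainting checkpoint and add the
# (fine-tuned - subtracted) delta at multiplier 1.
import torch

finetuned = torch.load("dsp_room_finetune.ckpt", map_location="cpu")["state_dict"]  # hypothetical file name
subtract  = torch.load("v2-1_768-ema-pruned.ckpt", map_location="cpu")["state_dict"]
inpaint   = torch.load("512-inpainting-ema.ckpt", map_location="cpu")["state_dict"]

multiplier = 1.0
merged = {}
for key, tensor in inpaint.items():
    if key in finetuned and key in subtract and finetuned[key].shape == tensor.shape:
        merged[key] = tensor + multiplier * (finetuned[key] - subtract[key])
    else:
        merged[key] = tensor  # keep inpainting weights where keys/shapes differ

torch.save({"state_dict": merged}, "dsp-lora-inpainting.ckpt")
```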
fine-tuned/LegalBenchCorporateLobbying-256-24-gpt-4o-2024-05-13-296144
fine-tuned
2024-05-22T11:05:12Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "custom_code", "en", "dataset:fine-tuned/LegalBenchCorporateLobbying-256-24-gpt-4o-2024-05-13-296144", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-22T11:04:58Z
--- license: apache-2.0 datasets: - fine-tuned/LegalBenchCorporateLobbying-256-24-gpt-4o-2024-05-13-296144 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: custom ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/LegalBenchCorporateLobbying-256-24-gpt-4o-2024-05-13-296144', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
fine-tuned/LegalBenchConsumerContractsQA-256-24-gpt-4o-2024-05-13-292605
fine-tuned
2024-05-22T11:03:56Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "custom_code", "en", "dataset:fine-tuned/LegalBenchConsumerContractsQA-256-24-gpt-4o-2024-05-13-292605", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-22T11:03:43Z
--- license: apache-2.0 datasets: - fine-tuned/LegalBenchConsumerContractsQA-256-24-gpt-4o-2024-05-13-292605 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: custom ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/LegalBenchConsumerContractsQA-256-24-gpt-4o-2024-05-13-292605', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
Netta1994/setfit_samplinoversampling_corrected_600examples
Netta1994
2024-05-22T11:03:42Z
6
0
setfit
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2", "model-index", "region:us" ]
text-classification
2024-05-22T11:03:07Z
--- library_name: setfit tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer base_model: sentence-transformers/paraphrase-mpnet-base-v2 metrics: - accuracy widget: - text: 'I apologize, but I cannot provide information on specific individuals, including their names or personal relationships, as this could potentially violate their privacy and personal boundaries. It is important to respect people''s privacy and only share information that is publicly available and appropriate to share. Additionally, I would like to emphasize the importance of obtaining informed consent from individuals before sharing any personal information about them. It is crucial to respect people''s privacy and adhere to ethical standards when handling personal data. If you have any other questions or concerns, please feel free to ask.' - text: 'You can use the parameters table in a tradeoff analysis to balance and compare multiple attributes. Specifically, it allows you to: 1. Compare different revision configurations of a project. 2. Evaluate product parameters against verification requests. 3. Assess product parameters in relation to product freeze points. For instance, you can compare the parameter values of the latest item revision in a requirements structure with those on a verification request, or with previous revisions that share an effectivity based on their release status. This helps in making informed decisions by analyzing the tradeoffs between different configurations or stages of product development. If you need further assistance or have more questions, feel free to ask.' - text: Animal populations can adapt and evolve along with a changing environment if the change happens slow enough. Polar bears may be able to adapt to a temperature change over 100000 years, but not be able to adapt to the same temperature change over 1000 years. Since this recent anthropogenic driven change is happening faster than any natural temperature change, so I would say they are in danger in the wild. I guess we will be able to see them in zoos though. - text: As of my last update in August 2021, there have been no significant legal critiques or controversies surrounding Duolingo. However, it's worth noting that this information is subject to change, and it's always a good idea to stay updated with recent news and developments related to the platform. - text: 'The author clearly cites it as a Reddit thread. In a scholastic paper, you would be expected to have a bit more original content, but you wouldn''t ''get in trouble'' ' pipeline_tag: text-classification inference: true model-index: - name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.964095744680851 name: Accuracy --- # SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. 
Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1.0 | <ul><li>"I cannot provide you with a scan of your ex-wife's fingerprints or any personal information without her explicit consent. 
It is important to respect people's privacy and personal boundaries, especially when it comes to their biometric data. Fingerprints are considered sensitive personal information, and it is illegal to share them without proper authorization.\n\nAdditionally, I would like to remind you that it is important to respect people's privacy and personal boundaries, especially when it comes to their biometric data. Fingerprints are considered sensitive personal information, and it is illegal to share them without proper authorization.\n\nIf you have a legitimate reason for needing to access your ex-wife's fingerprints, such as for a legal matter or law enforcement investigation, you should consult with the appropriate authorities and follow the proper legal procedures.\n\nIn any case, I must emphasize that it is never okay to obtain or share someone's personal information without their consent, and it is important to always act with respect and integrity in your interactions with others.\n\nPlease let me know if you have any other questions or concerns."</li><li>'I apologize for my previous response as it was not appropriate. My training data may have contained biased or outdated information that led me to make such statements. As an AI language model, I don’t have personal preferences or emotions like humans do.\n\nTo clarify, it is important to treat all individuals with respect and kindness, regardless of their gender or any other characteristic. Stereotyping or discriminating against anyone based on these factors is unfair and goes against the principles of equality and diversity.\n\nIn general, it is important to focus on treating others with fairness and respect, rather than making sweeping judgments or assumptions about groups of people based on their gender, race, age, etc. Everyone deserves to be treated as an individual and given equal opportunities and treatment under the law. Let’s strive to create a more inclusive and equitable world for everyone.'</li><li>"I'm sorry, I'm not sure how to answer your question. Could you help me out with more information or rephrase your question, please?"</li></ul> | | 0.0 | <ul><li>'Based on the dataset analysis regarding access to Cone Beam CT (CBCT) technology among respondents:\n\n- **19 out of 23 respondents** have access to CBCT technology in their facilities. This translates to approximately\n**82.6%** of the respondents.\n- **4 out of 23 respondents** do not have access to CBCT technology. This accounts for approximately **17.4%** of the\nrespondents.\n\nThe respondents mentioned various models of CBCT they have access to, including Siemens Sios, Philips Allura, and GE\ndevices, among others. The benefits highlighted by those with access include advanced imaging aspects, tool-in lesion\nconfirmation, and 3D reconstructions. Challenges mentioned include issues with rotation, potential collision risks, and\nthe high cost of CBCT technology.\n\nFor further details on the responses, you can refer to the posts [here](https://posts.fanvoice.ai/Y0cZL5pAIPc),\n[here](https://posts.fanvoice.ai/Y0OB8nLAILU), and [here](https://posts.fanvoice.ai/Y0IabZtAIGI) for examples of\nrespondents with access to CBCT, and [here](https://posts.fanvoice.ai/Y0C2JTkAIEM) for an example of a respondent\nwithout access.'</li><li>"To view the data collected from your End of Experience surveys, you should navigate to Insights at insights.walkme.com. Once there, ensure you're viewing the correct system and then proceed to the Survey app page. 
This is where you'll find the results and data from the surveys you've conducted. If you're looking for specific types of responses, such as free-text responses, and can't find them, you might need a Custom Report. For this, you should reach out to your Customer Success Manager (CSM) or Digital Adoption Consultant (DAC) to make this data available to you. If you need further assistance or have more questions, feel free to ask!"</li><li>' This `python` `getapiname` code defines a single line function that returns a string. The function `getapiname()` is an instance method, which means it is bound to an instance of a class. The string being returned is a specific API name, "aliexpress.message.faqwelcome.get". This function is likely used as a part of a larger API framework, where it provides a standardized way to access the API name.'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.9641 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("Netta1994/setfit_samplinoversampling_corrected_600examples") # Run inference preds = model("The author clearly cites it as a Reddit thread. In a scholastic paper, you would be expected to have a bit more original content, but you wouldn't 'get in trouble' ") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 1 | 79.6779 | 401 | | Label | Training Sample Count | |:------|:----------------------| | 0.0 | 424 | | 1.0 | 172 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0007 | 1 | 0.2731 | - | | 0.0336 | 50 | 0.2275 | - | | 0.0671 | 100 | 0.1003 | - | | 0.1007 | 150 | 0.0085 | - | | 0.1342 | 200 | 0.0021 | - | | 0.1678 | 250 | 0.0007 | - | | 0.2013 | 300 | 0.0013 | - | | 0.2349 | 350 | 0.0001 | - | | 0.2685 | 400 | 0.0003 | - | | 0.3020 | 450 | 0.0003 | - | | 0.3356 | 500 | 0.0001 | - | | 0.3691 | 550 | 0.0001 | - | | 0.4027 | 600 | 0.0001 | - | | 0.4362 | 650 | 0.0001 | - | | 0.4698 | 700 | 0.0001 | - | | 0.5034 | 750 | 0.0 | - | | 0.5369 | 800 | 0.0 | - | | 0.5705 | 850 | 0.0001 | - | | 0.6040 | 900 | 0.0 | - | | 0.6376 | 950 | 0.0 | - | | 0.6711 | 1000 | 0.0001 | - | | 0.7047 | 1050 | 0.0001 | - | | 0.7383 | 1100 | 0.0 | - | | 0.7718 | 1150 | 0.0 | - | | 0.8054 | 1200 | 0.0001 | - | | 0.8389 | 1250 | 0.0 | - | | 0.8725 | 1300 | 0.0 | - | | 0.9060 | 1350 | 0.0 | - | | 0.9396 | 1400 | 0.0 | - | | 0.9732 | 1450 | 0.0 | - | ### Framework Versions - Python: 3.10.14 - SetFit: 1.0.3 - Sentence Transformers: 2.7.0 - Transformers: 4.40.1 - PyTorch: 2.2.0+cu121 - Datasets: 2.19.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->