Dataset schema (column name, type, and observed range across the dump):

| Column | Type | Observed range / values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-04 18:27:18 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 468 distinct values |
| tags | sequence of strings | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-04 18:26:45 |
| card | string | length 11 to 1.01M |
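Each row below is one Hub model with these fields, followed by its raw model card. As a minimal sketch, the same metadata can be pulled directly from the Hub with `huggingface_hub` (assuming a recent release; attribute names such as `created_at` may differ in older versions):

```python
# Sketch: fetch equivalent fields for a few Hub models with huggingface_hub.
from huggingface_hub import HfApi, ModelCard

api = HfApi()
for info in api.list_models(limit=3, full=True):
    card_text = ""
    try:
        # The raw README / model card, corresponding to the `card` column here.
        card_text = ModelCard.load(info.id).content
    except Exception:
        pass  # some repos ship no card
    print(info.id, info.author, info.downloads, info.likes,
          info.library_name, info.pipeline_tag,
          info.created_at, info.last_modified, len(card_text))
```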
FINETUNERMYSTRAL/mmistral-supervised-ft-1epochs
FINETUNERMYSTRAL
2024-02-03T09:44:25Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-02-03T09:41:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
s3nh/zephyr-speakleash-007-pl-8192-32-16-0.05-GGUF
s3nh
2024-02-03T09:42:00Z
4
1
transformers
[ "transformers", "gguf", "text-generation", "zh", "en", "license:openrail", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-02-03T09:12:31Z
--- license: openrail pipeline_tag: text-generation library_name: transformers language: - zh - en --- ## Original model card Buy me a coffee if you like this project ;) <a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a> #### Description GGUF Format model files for [This project](https://huggingface.co/Nondzu/zephyr-speakleash-007-pl-8192-32-16-0.05). ### GGUF Specs GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired: Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information. Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models. mmap compatibility: models can be loaded using mmap for fast loading and saving. Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used. Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user. The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model. ### inference User: Tell me story about what is an quantization and what do we need to build. I have a little bit of experience with the term “quantization” from physics, but not much. When I hear it, the first thing that comes to mind is some kind of measuring instrument, like a ruler or voltmeter. What does the phrase “quantized by 1024” mean? It sounds more mathematical than physical. The term quantization comes from quantum mechanics and refers to a process whereby a continuous function is approximated by discrete values, that is, it is “discretized”. In this sense, we can say that the “quanta” are the differences between adjacent # Original model card
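The GGUF properties described in the card above (single-file deployment, self-describing key-value metadata, mmap loading) are what make these files directly loadable with llama.cpp bindings. A minimal sketch with `llama-cpp-python`; the quantization filename below is an assumption, so use whichever `.gguf` file the repo actually ships:

```python
# Sketch: loading a GGUF file from this repo with llama-cpp-python.
from llama_cpp import Llama

# Assumed filename; list the repo's files to pick a real quantization.
llm = Llama(model_path="zephyr-speakleash-007-pl-8192-32-16-0.05.Q4_K_M.gguf", n_ctx=4096)
out = llm("Tell me a story about what quantization is.", max_tokens=128)
print(out["choices"][0]["text"])
```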
vj1148/codellama2-finetuned-codex-fin-bcode
vj1148
2024-02-03T09:41:48Z
1
0
peft
[ "peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:codellama/CodeLlama-7b-hf", "base_model:adapter:codellama/CodeLlama-7b-hf", "region:us" ]
null
2024-01-31T19:02:17Z
--- library_name: peft base_model: codellama/CodeLlama-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
Adeptschneider/merged-fine-tuned-Llama2
Adeptschneider
2024-02-03T09:38:38Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T09:34:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Adeptschneider/fine-tuned-Llama2
Adeptschneider
2024-02-03T09:15:36Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-03T09:15:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
drakrig/Cartpole-v1-policy-gradient
drakrig
2024-02-03T09:12:25Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-02-03T09:12:16Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Cartpole-v1-policy-gradient results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
saheedsanni/distilbert-base-uncased-finetuned-cola
saheedsanni
2024-02-03T09:02:10Z
1
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-03T09:01:18Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: saheedsanni/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # saheedsanni/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5203 - Validation Loss: 0.4792 - Train Matthews Correlation: 0.4572 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5203 | 0.4792 | 0.4572 | 0 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.10.1 - Datasets 2.16.1 - Tokenizers 0.13.3
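For reference, the optimizer configuration listed in the card above corresponds to the following Keras objects (a sketch assuming TensorFlow 2.x):

```python
# Sketch: rebuilding the training optimizer described in the card with Keras.
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=1602,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False
)
```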
dagbs/deepseek-coder-7b-base-v1.5-GGUF
dagbs
2024-02-03T08:58:36Z
43
2
null
[ "gguf", "base_model:deepseek-ai/deepseek-coder-7b-base-v1.5", "base_model:quantized:deepseek-ai/deepseek-coder-7b-base-v1.5", "license:other", "endpoints_compatible", "region:us" ]
null
2024-02-03T04:07:00Z
--- license: other license_name: deepseek-license license_link: >- https://huggingface.co/deepseek-ai/deepseek-coder-7b-base-v1.5/blob/main/LICENSE base_model: deepseek-ai/deepseek-coder-7b-base-v1.5 quantized_by: dagbs --- # deepseek-coder-7b-base-v1.5 - GGUF - Model organization: [DeepSeek](https://huggingface.co/deepseek-ai) - Original model: [deepseek-ai/deepseek-coder-7b-base-v1.5](https://huggingface.co/deepseek-ai/deepseek-coder-7b-base-v1.5) F16 converted using llama.cpp convert.py with the following arguments * --pad-vocab * --vocab-type bpe
orangeoceans/llama-2-7b-minima-morallma
orangeoceans
2024-02-03T08:46:00Z
4
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2305.14314", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T07:43:37Z
## minima-moraLLMa A LLaMA 2 7B finetune which generates aphorisms that sound like philosopher Theodor Adorno. This was trained on the ~150 aphorisms from Adorno's Minima Moralia, which each contain a title and some body text, using the [4-bit QLoRA approach](https://arxiv.org/abs/2305.14314). Run it with the following prompt: ``` <s>[INST] <<SYS>>You are Theodor Adorno. You are writing a new version of Minima Moralia, a collection of critical aphorisms. Here is one such aphorism.<</SYS>> Some Topic [/INST] ``` As with Minima Moralia itself, the resulting aphorism may not be directly on topic!
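A minimal sketch of loading this finetune in 4-bit with `transformers` and `bitsandbytes` and sampling with the prompt format above; the topic string and generation settings are placeholders, and a CUDA GPU is assumed:

```python
# Sketch: 4-bit load and generation with the card's prompt template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "orangeoceans/llama-2-7b-minima-morallma"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")

prompt = (
    "<s>[INST] <<SYS>>You are Theodor Adorno. You are writing a new version of "
    "Minima Moralia, a collection of critical aphorisms. Here is one such aphorism.<</SYS>> "
    "On Small Talk [/INST]"  # placeholder topic
)
inputs = tok(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True)
print(tok.decode(output[0], skip_special_tokens=True))
```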
boruyang/a2c-PandaReachDense-v3
boruyang
2024-02-03T08:31:26Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-03T08:27:00Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.26 +/- 0.10 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of a **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
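The usage section in the card above is left as a TODO; a minimal sketch of filling it in with `huggingface_sb3`, where the checkpoint filename is an assumption based on the usual SB3 Hub naming convention:

```python
# Sketch: downloading and loading the A2C checkpoint with stable-baselines3.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="boruyang/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",  # assumed filename; check the repo's files
)
model = A2C.load(checkpoint)
# Rolling the policy out requires a PandaReachDense-v3 environment (panda-gym + gymnasium).
```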
zzzghttt/CodeLlama-7b-Test-Instruct-lora
zzzghttt
2024-02-03T08:30:57Z
2
0
peft
[ "peft", "region:us" ]
null
2023-12-30T18:34:36Z
--- library_name: peft --- # CodeLlama-7b-Test-Instruct-lora ## Description This repo contains a low-rank adapter for [CodeLlama-7b-Instruct](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) fit on the [zzzghttt/code2test](https://huggingface.co/datasets/zzzghttt/code2test) dataset. The Lora model is primarily aimed at generating high-quality unit tests in Java. ### How to use See [ChatUniTest Models](https://github.com/ZJU-ACES-ISE/chatunitest-models) ## Training data [zzzghttt/code2test](https://huggingface.co/datasets/zzzghttt/code2test) ## Training procedure This version of the weights was trained with the following hyperparameters: - batch_size: 128 - micro_batch_size: 4 - num_epochs: 3 (load from best epoch) - learning_rate: 3e-4 - cutoff_len: 2048 - lora_r: 64 - lora_alpha: 16 - lora_dropout: 0.05 - lora_target_modules: ['q_proj', 'v_proj'] The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0
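Beyond the linked ChatUniTest instructions, a minimal sketch of attaching this adapter to its base model with PEFT; fp16 and `device_map="auto"` are assumptions about available hardware:

```python
# Sketch: loading the base CodeLlama-7b-Instruct model and applying the LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "codellama/CodeLlama-7b-Instruct-hf"
adapter_id = "zzzghttt/CodeLlama-7b-Test-Instruct-lora"

tok = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```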
zzzghttt/TestGen2-lora
zzzghttt
2024-02-03T08:29:49Z
3
0
peft
[ "peft", "region:us" ]
null
2024-02-03T05:52:58Z
--- library_name: peft --- # CodeLlama-7b-Test-Instruct-lora ## Description This repo contains a low-rank adapter for [CodeLlama-7b-Instruct](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) fit on the [zzzghttt/context2test](https://huggingface.co/datasets/zzzghttt/context2test) dataset. The Lora model is primarily aimed at generating high-quality unit tests in Java. ### How to use See [ChatUniTest Models](https://github.com/ZJU-ACES-ISE/chatunitest-models) ## Training data [zzzghttt/context2test](https://huggingface.co/datasets/zzzghttt/context2test) ## Training procedure This version of the weights was trained with the following hyperparameters: - batch_size: 128 - micro_batch_size: 4 - num_epochs: 3 (load from best epoch) - learning_rate: 3e-4 - cutoff_len: 2048 - lora_r: 64 - lora_alpha: 16 - lora_dropout: 0.05 - lora_target_modules: ['q_proj', 'v_proj'] The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0
rhplus0831/maid-yuzu-v2-mid-exl2-6.0bpw-rpcal
rhplus0831
2024-02-03T08:27:05Z
6
2
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "mergekit", "merge", "base_model:smelborp/MixtralOrochi8x7B", "base_model:merge:smelborp/MixtralOrochi8x7B", "base_model:ycros/BagelMIsteryTour-v2-8x7B", "base_model:merge:ycros/BagelMIsteryTour-v2-8x7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T08:20:35Z
--- base_model: - smelborp/MixtralOrochi8x7B - ycros/BagelMIsteryTour-v2-8x7B library_name: transformers tags: - mergekit - merge --- # maid-yuzu-v2-mid This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [smelborp/MixtralOrochi8x7B](https://huggingface.co/smelborp/MixtralOrochi8x7B) * [ycros/BagelMIsteryTour-v2-8x7B](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: model: path: smelborp/MixtralOrochi8x7B dtype: bfloat16 merge_method: slerp parameters: t: - value: 0.375 slices: - sources: - layer_range: [0, 32] model: model: path: smelborp/MixtralOrochi8x7B - layer_range: [0, 32] model: model: path: ycros/BagelMIsteryTour-v2-8x7B ```
Dhanraj1503/LunarLander-ppo
Dhanraj1503
2024-02-03T07:55:12Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2024-02-03T07:55:07Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -136.24 +/- 61.88 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'colab-experiment' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'Dhanraj1503/LunarLander-ppo' 'batch_size': 512 'minibatch_size': 128} ```
CLMBR/re-irr-sv-agr-transformer-1
CLMBR
2024-02-03T07:49:29Z
4
0
transformers
[ "transformers", "pytorch", "opt", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-25T09:56:43Z
--- tags: - generated_from_trainer model-index: - name: re-irr-sv-agr-transformer-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # re-irr-sv-agr-transformer-1 This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.8917 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 1 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 3052726 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-------:|:---------------:| | 4.2191 | 0.03 | 76320 | 4.2133 | | 4.0124 | 1.03 | 152640 | 4.0446 | | 3.9042 | 0.03 | 228960 | 3.9689 | | 3.8378 | 1.03 | 305280 | 3.9282 | | 3.7862 | 0.03 | 381600 | 3.9036 | | 3.7465 | 1.03 | 457920 | 3.8875 | | 3.7125 | 0.03 | 534240 | 3.8780 | | 3.6811 | 1.03 | 610560 | 3.8712 | | 3.6533 | 0.03 | 686880 | 3.8683 | | 3.6278 | 1.03 | 763200 | 3.8661 | | 3.604 | 0.03 | 839520 | 3.8653 | | 3.5878 | 1.03 | 915840 | 3.8643 | | 3.5705 | 0.03 | 992160 | 3.8659 | | 3.5519 | 0.03 | 1068480 | 3.8674 | | 3.5332 | 0.03 | 1144800 | 3.8693 | | 3.516 | 1.03 | 1221120 | 3.8696 | | 3.498 | 0.03 | 1297440 | 3.8707 | | 3.4839 | 1.03 | 1373760 | 3.8720 | | 3.4693 | 0.03 | 1450080 | 3.8750 | | 3.4632 | 1.03 | 1526400 | 3.8761 | | 3.4533 | 0.03 | 1602720 | 3.8784 | | 3.4476 | 1.03 | 1679040 | 3.8794 | | 3.4382 | 0.03 | 1755360 | 3.8807 | | 3.4264 | 1.03 | 1831680 | 3.8814 | | 3.4151 | 0.03 | 1908000 | 3.8848 | | 3.4026 | 1.03 | 1984320 | 3.8861 | | 3.3883 | 0.03 | 2060640 | 3.8874 | | 3.3828 | 1.03 | 2136960 | 3.8885 | | 3.376 | 0.03 | 2213280 | 3.8899 | | 3.3616 | 1.03 | 2289600 | 3.8903 | | 3.3522 | 0.03 | 2365920 | 3.8921 | | 3.3376 | 0.03 | 2442240 | 3.8915 | | 3.3228 | 0.03 | 2518560 | 3.8923 | | 3.3132 | 1.03 | 2594880 | 3.8935 | | 3.3038 | 0.03 | 2671200 | 3.8945 | | 3.2999 | 0.03 | 2747520 | 3.8946 | | 3.2939 | 0.03 | 2823840 | 3.8947 | | 3.2922 | 1.03 | 2900160 | 3.8938 | | 3.2867 | 0.03 | 2976480 | 3.8927 | | 3.2797 | 1.02 | 3052726 | 3.8917 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.3
saishf/HuginnV5.6-12.6B-GGUF
saishf
2024-02-03T07:39:22Z
9
1
null
[ "gguf", "text-generation", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T04:24:55Z
--- license: cc-by-4.0 pipeline_tag: text-generation --- GGUF quants for Huginn V5.6: https://huggingface.co/The-Face-Of-Goonery/HuginnV5.5-12.6B (Read the disclaimer)
Ngoctho/Chigiri
Ngoctho
2024-02-03T07:25:30Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2024-02-03T07:25:27Z
--- license: bigscience-openrail-m ---
fterry/FofoNet-CatDolphin-PPT-slerp
fterry
2024-02-03T07:22:25Z
7
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "rishiraj/CatPPT-base", "HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v2", "base_model:HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v2", "base_model:merge:HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v2", "base_model:rishiraj/CatPPT-base", "base_model:merge:rishiraj/CatPPT-base", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T07:17:20Z
--- tags: - merge - mergekit - lazymergekit - rishiraj/CatPPT-base - HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v2 base_model: - rishiraj/CatPPT-base - HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v2 --- # FofoNet-CatDolphin-PPT-slerp FofoNet-CatDolphin-PPT-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [rishiraj/CatPPT-base](https://huggingface.co/rishiraj/CatPPT-base) * [HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v2](https://huggingface.co/HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v2) ## 🧩 Configuration ```yaml slices: - sources: - model: rishiraj/CatPPT-base layer_range: [0, 32] - model: HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v2 layer_range: [0, 32] merge_method: slerp base_model: rishiraj/CatPPT-base parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "fterry/FofoNet-CatDolphin-PPT-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
cecb/newsfinetune_mistral_full_03022024
cecb
2024-02-03T07:22:10Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-02-03T07:20:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jeiku/Soul_3B
jeiku
2024-02-03T06:51:40Z
4
0
transformers
[ "transformers", "safetensors", "stablelm_epoch", "text-generation", "mergekit", "merge", "conversational", "custom_code", "arxiv:2212.04089", "base_model:jeiku/Futa_Erotica_StableLM", "base_model:merge:jeiku/Futa_Erotica_StableLM", "base_model:jeiku/Gnosis_256_StableLM", "base_model:merge:jeiku/Gnosis_256_StableLM", "base_model:jeiku/Humiliation_StableLM", "base_model:merge:jeiku/Humiliation_StableLM", "base_model:jeiku/LimaRP_StableLM", "base_model:merge:jeiku/LimaRP_StableLM", "base_model:jeiku/Rosa_v1_3B", "base_model:merge:jeiku/Rosa_v1_3B", "base_model:jeiku/Theory_of_Mind_128_StableLM", "base_model:merge:jeiku/Theory_of_Mind_128_StableLM", "autotrain_compatible", "region:us" ]
text-generation
2024-02-03T06:43:58Z
--- base_model: - jeiku/Rosa_v1_3B - jeiku/LimaRP_StableLM - jeiku/Rosa_v1_3B - jeiku/Gnosis_256_StableLM - jeiku/Rosa_v1_3B - jeiku/Rosa_v1_3B - jeiku/Humiliation_StableLM - jeiku/Rosa_v1_3B - jeiku/Futa_Erotica_StableLM - jeiku/Rosa_v1_3B - jeiku/Theory_of_Mind_128_StableLM library_name: transformers tags: - mergekit - merge --- # fatality This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) as a base. ### Models Merged The following models were included in the merge: * [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/LimaRP_StableLM](https://huggingface.co/jeiku/LimaRP_StableLM) * [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Gnosis_256_StableLM](https://huggingface.co/jeiku/Gnosis_256_StableLM) * [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Humiliation_StableLM](https://huggingface.co/jeiku/Humiliation_StableLM) * [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Futa_Erotica_StableLM](https://huggingface.co/jeiku/Futa_Erotica_StableLM) * [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Theory_of_Mind_128_StableLM](https://huggingface.co/jeiku/Theory_of_Mind_128_StableLM) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: task_arithmetic base_model: jeiku/Rosa_v1_3B models: - model: jeiku/Rosa_v1_3B+jeiku/Futa_Erotica_StableLM parameters: weight: 0.75 - model: jeiku/Rosa_v1_3B+jeiku/Gnosis_256_StableLM parameters: weight: 0.95 - model: jeiku/Rosa_v1_3B+jeiku/Humiliation_StableLM parameters: weight: 0.5 - model: jeiku/Rosa_v1_3B+jeiku/Theory_of_Mind_128_StableLM parameters: weight: 0.75 - model: jeiku/Rosa_v1_3B+jeiku/LimaRP_StableLM parameters: weight: 0.65 dtype: float16 ```
r0in/Reinforce-Pixelcopter-PLE-v0-c1
r0in
2024-02-03T06:46:20Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-02-03T06:45:32Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0-c1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 23.10 +/- 13.23 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
albinkroby/my-pet-dog-xgz
albinkroby
2024-02-03T06:40:45Z
4
2
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-12T11:47:14Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog-xgz Dreambooth model trained by albinkroby following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: AJCE360 Sample pictures of this concept: ![0](https://huggingface.co/mrwhog/my-pet-dog-xgz/resolve/main/sample_images/xzg_(7).jpg)
jeiku/Furry_Request_3B
jeiku
2024-02-03T06:32:34Z
6
1
transformers
[ "transformers", "safetensors", "stablelm_epoch", "text-generation", "mergekit", "merge", "conversational", "custom_code", "arxiv:2203.05482", "base_model:jeiku/Furry_Request_StableLM", "base_model:merge:jeiku/Furry_Request_StableLM", "base_model:jeiku/Rosa_v1_3B", "base_model:merge:jeiku/Rosa_v1_3B", "autotrain_compatible", "region:us" ]
text-generation
2024-02-03T06:24:47Z
--- base_model: - jeiku/Rosa_v1_3B - jeiku/Furry_Request_StableLM library_name: transformers tags: - mergekit - merge --- # Furry This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Furry_Request_StableLM](https://huggingface.co/jeiku/Furry_Request_StableLM) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: linear models: - model: jeiku/Rosa_v1_3B+jeiku/Furry_Request_StableLM parameters: weight: 1 dtype: float16 ```
LoneStriker/Blue-Orchid-2x7b-GPTQ
LoneStriker
2024-02-03T06:30:23Z
58
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T06:27:16Z
--- license: apache-2.0 --- **Blue-Orchid-2x7b** GGUF: https://huggingface.co/nakodanei/Blue-Orchid-2x7b_GGUF Roleplaying focused MoE Mistral model. One expert is a merge of mostly RP models, the other is a merge of mostly storywriting models. So it should be good at both. The base model is SanjiWatsuki/Kunoichi-DPO-v2-7B. - Expert 1 is a merge of LimaRP, Limamono, Noromaid 0.4 DPO and good-robot. - Expert 2 is a merge of Erebus, Holodeck, Dans-AdventurousWinds-Mk2, Opus, Ashhwriter and good-robot. ## Prompt template (LimaRP): ``` ### Instruction: {system prompt} ### Input: User: {prompt} ### Response: Character: ``` Alpaca prompt template should work fine too.
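A small helper that assembles the LimaRP prompt template shown above before handing it to whatever loader you use for the GPTQ weights; the system prompt, user message, and character name are placeholders:

```python
# Sketch: building a LimaRP-style prompt string for this model.
def limarp_prompt(system_prompt: str, user_message: str, character: str = "Character") -> str:
    return (
        "### Instruction:\n"
        f"{system_prompt}\n\n"
        "### Input:\n"
        f"User: {user_message}\n\n"
        "### Response:\n"
        f"{character}:"
    )

print(limarp_prompt("You are a helpful roleplay partner.", "Hello there!"))
```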
Gigazinie/240203_QA_model
Gigazinie
2024-02-03T06:28:23Z
23
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "question-answering", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-02-03T05:39:54Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: 240203_QA_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 240203_QA_model This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.6866 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 250 | 3.2768 | | 3.3591 | 2.0 | 500 | 2.7866 | | 3.3591 | 3.0 | 750 | 2.6866 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
Yazawa/donut_240202
Yazawa
2024-02-03T06:20:33Z
89
0
transformers
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "base_model:Yazawa/donut-base-sroie", "base_model:finetune:Yazawa/donut-base-sroie", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-02-02T08:07:37Z
--- license: mit base_model: Yazawa/donut-base-sroie tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut_240202 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut_240202 This model is a fine-tuned version of [Yazawa/donut-base-sroie](https://huggingface.co/Yazawa/donut-base-sroie) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.16.1 - Tokenizers 0.15.1
Vasanth/Beast-Soul-new
Vasanth
2024-02-03T06:19:40Z
52
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "udkai/Turdus", "flemmingmiguel/MBX-7B", "base_model:flemmingmiguel/MBX-7B", "base_model:merge:flemmingmiguel/MBX-7B", "base_model:udkai/Turdus", "base_model:merge:udkai/Turdus", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T04:20:33Z
--- tags: - merge - mergekit - lazymergekit - udkai/Turdus - flemmingmiguel/MBX-7B base_model: - udkai/Turdus - flemmingmiguel/MBX-7B license: apache-2.0 --- # Beast-Soul-new Beast-Soul-new is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [udkai/Turdus](https://huggingface.co/udkai/Turdus) * [flemmingmiguel/MBX-7B](https://huggingface.co/flemmingmiguel/MBX-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: udkai/Turdus layer_range: [0, 32] - model: flemmingmiguel/MBX-7B layer_range: [0, 32] merge_method: slerp base_model: udkai/Turdus parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Vasanth/Beast-Soul-new" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Thala007Dhoni/facedeep
Thala007Dhoni
2024-02-03T06:16:21Z
0
0
null
[ "region:us" ]
null
2024-02-03T05:03:32Z
# deepfake-detection Identify the images as real or fake using state-of-the-art AI models
sarthakharne/Phi1_5-PreTrained-3-epoch
sarthakharne
2024-02-03T06:14:18Z
4
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T06:11:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sarthakharne/Phi1_5-PreTrained-2-epoch
sarthakharne
2024-02-03T06:09:54Z
4
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T06:07:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sarthakharne/Phi1_5-PreTrained-1-epoch
sarthakharne
2024-02-03T06:04:56Z
4
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T06:02:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JahnaviKumar/FGL_DevEmotionAnalysis
JahnaviKumar
2024-02-03T06:00:52Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-03T05:26:38Z
This model is trained on comments from fast-growing programming languages on GitHub. The corresponding paper has been accepted at ICPC'24; for further details on the dataset, methodology, and results, please refer to https://doi.org/10.1145/3643916.3644422.
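Since this is a standard `roberta` text-classification checkpoint, it can presumably be used through the `transformers` pipeline. A minimal sketch; the example comment is invented, and the label names returned depend on the model's config (the card does not list the label set):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="JahnaviKumar/FGL_DevEmotionAnalysis")

# An illustrative developer comment; labels come from the model's own config.
print(classifier("This fix finally works, thanks a lot for the quick review!"))
```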
karawalla/aqmodel_20240204
karawalla
2024-02-03T05:53:35Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T05:49:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
morisato/TRAIN
morisato
2024-02-03T05:52:26Z
0
2
null
[ "ja", "license:unknown", "region:us" ]
null
2024-02-02T13:33:41Z
---
license: unknown
language:
- ja
---

# Railway / Train LoRAs

To all lovers of odd LoRAs around the world, how are you doing?

These are LoRAs of railways, trains and similar subjects, created as part of an "experiment in additional training".
To get straight to the conclusion: rolling stock and car interiors are quite hard to learn, and the LoRAs uploaded here can rarely generate the kind of illustrations you might hope for.
Because AI image generation works by "reproducing a vaguely blurred image out of a sea of noise", perspective often gets distorted and arbitrary rearrangements get added.
Generating illustrations of vehicles and machinery such as cars, motorcycles and trains may simply not be a good match for enthusiastic fans who observe each type closely and notice even small differences in specification.

The SD1.5-family models we use appear to have learned overseas railway scenery to some extent.
If you put "train" or "train interior" in the prompt, you can generate illustrations of train exteriors and interiors, but they usually come out looking like foreign trains and interiors; they cannot depict the domestic rolling stock and interior scenery that we in Japan see every day.
Can additional training such as LoRA produce illustrations of an outing on a Japanese train? The Yamanote Line 235 series and Hankyu 3000 series interior-scenery LoRAs uploaded separately were a first attempt at finding out.
The result: interior features are learned to some extent, but regularities such as window, door and seat placement cannot be reproduced, so the scenery ends up looking like a Japan from another dimension.

Next came the exteriors. As with the interiors, the features are learned to a degree, but the cars stretch vertically or horizontally, deform, lose parts that should be there or grow extra ones, producing illustrations that feel quite wrong to anyone who knows the real thing.
I privately call the rate at which a LoRA produces the image I had in mind its "batting average", and here that average does not even reach 20-30%.

Since the broad features are learned, referencing photos of the actual cars via ControlNet and the like improves things somewhat... but you have to prepare that material, which I suspect is rather a hassle.

Perhaps the real gain from "just running the experiment" is a rough sense of what seems feasible and what will be hard to improve.
With that in mind, here is a roundup of the various railway-related LoRAs I have made recently. I hope they are useful for your own research and experiments.

## E233_1000_SD15
![00005-lametta_v2012 - 4076518618.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/1qZFH_1NbrhL0rdOWi-Wu.png)
Interior scenery of the Keihin-Tōhoku Line E233 series. Seat and door placement, partition shapes and so on all collapse. In this sample image the ceiling lighting layout and the air-conditioning louvres / line-flow fans are also reproduced incorrectly.
e233, train interior, scenery, seat, window, door, poster (object)

## E233ex_SD15
![00005-lametta_v2012 - 1430238489.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/bY9DUcLE3DpxnFab8i0_7.png)
Exterior of the Keihin-Tōhoku Line E233 series. It frequently generates subtly deformed and collapsed trains you have never seen before.
e233, exterior, train, train station, railroad tracks, scenery, outdoors, day, real world location,

## E235_SD15_V6
![00000-lametta_v2012 - 4037457639.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/pvwxnxrVRJSMOXL9sCjP6.png)
Interior of the Yamanote Line E235 series. Seat and door placement, stanchion poles, the above-window signage monitors and the partition shapes all collapse.
e235, train interior, scenery, seat, reflection, window, reflective floor, poster (object), realistic,

## Hanshin5000
![00001-lametta_v2012 - 469739754.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/PfhCXZ_g4SOwa04lo789d.png)
Exterior of the Hanshin "Jet Car" 5001 series. It tends to deform into a soft, melted 5001.
Hanshin5000, scenery, railroad tracks, train station, outdoors, train, real world location, power lines,

## JNR205ex_SD15
![00008-lametta_v2012 - 4059319666.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/2yPwN0CgQRYLXn9620P-X.png)
Exterior of the Saikyō Line 205 series. Otherworldly 205s are frequently generated.
JNR205, train, railroad tracks, scenery, real world location, outdoors, realistic, photo background, building, power lines, headlight
JNR205, train, train station, railroad tracks, scenery, real world location, outdoors, day, ceiling, ceiling light, tail lights

## JNR12kei_SD15
![00002-lametta_v2012 - 1414678840.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/0qXqvMm9x4KNeIxM5XALI.png)
Interior of former JNR 12 series passenger coaches. The seat layout tends to come out a mess.
12kei, aisle, train interior, scenery, window, seat, ceiling light, indoors, sunlight, reflective floor

## JNR_SUHA43_SD15
![00001-lametta_v2012 - 3193343041.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/rcSaNz6gnvd6mBb4XyfYy.png)
Interior and exterior of former JNR SuHa 43 / SuHaFu 42 passenger coaches. The seat layout tends to come out a mess.
suha43aisle, train interior, scenery, seat, window, sunlight, ceiling, ceiling light, indoors
suha43, railroad tracks, train station, train, scenery, outdoors, day, tree, real world location

## JNR_SUHA43W_SD15
![00001-lametta_v2012 - 3471792112.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/Tt-KhWAF8uMzEDqVC6cTk.png)
An attempt at travel-scenery illustrations, trained only on views of the window-side box seats as seen from the aisle of SuHa 43 / SuHaFu 42 coaches. It did not work well: the base model's own "train interior" tag is strongly tied to a viewpoint looking along the direction of travel.
suha43window, train interior, scenery, seat, window, shadow, day, sunlight, door, indoors

## JNR_oha35_SD15
![00000-lametta_v2012 - 3668599625.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/tzS9CzmthaiTsp8gr6zZM.png)
Interior of former JNR OHa 35 passenger coaches, slightly older than the SuHa 43; this LoRA was trained on the wooden interiors that received little modernisation. The seat layout tends to come out a mess.
oha35, train interior, scenery, window, indoors, sunlight, chair, ceiling, ceiling light, wooden floor

## oha35_deck_SD15
![00012-lametta_v2012 - 185322511.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/GBn45IZEQ_BoTULsjHdJ0.png)
An attempt to illustrate the area around the vestibule (deck) of old passenger coaches. It does not work well: results turn foreign-looking, turn into houses, or simply collapse.
kyukyaku, vestibule, train interior, scenery, door, indoors, ceiling light, wooden floor, train
kyukyaku, scenery, train station, railroad tracks, day, outdoors, door, window, sign, sunlight, train, vestibule, outdoors
kyukyaku, train interior, scenery, vestibule, building, train station, power lines, outdoors, door, window

## Osaka Loop Line 103 series
![00001-lametta_v2012 - 2937007514.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/bpAMZ43Yn5-xTHb8v7kxR.png)
The 103 series that used to run on the Osaka Loop Line. JR West's life-extended 40N refurbished 103s already differed considerably from the original; the generated images tend to drift even further away from it.
JRE103, train, train station, railroad tracks, outdoors, real world location, photo background, 1boy, realistic, standing, scenery, headlight
JRE103, train, train station, railroad tracks, multiple boys, vehicle focus, scenery, tail lights

## Osaka Loop Line 201 series
![00000-lametta_v2012 - 3076100390.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/FzUw_5MHqNjeRd-EKdzUT.png)
The 201 series that used to run on the Osaka Loop Line. Otherworldly 201s tend to be generated.
JRE201, train, night, train station, scenery, outdoors, building, railroad tracks, headlight
JRE201, train, train station, railroad tracks, scenery, vehicle focus, outdoors, tail lights

## Osaka Loop Line 323 series
![00005-lametta_v2012 - 522113296.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/8W7ndNWngbDINWnzhq7Zx.png)
The 323 series, currently the mainstay of the Osaka Loop Line. Otherworldly 323s tend to be generated.
JRE323, train, train station, pants, multiple boys, backpack, bag, railroad tracks, multiple girls, shoes, scenery, real world location, standing, headlight
JRE323, train, train station, railroad tracks, scenery, outdoors, real world location, tail lights

## OsakaMetro10A
![00003-lametta_v2012 - 1619719219.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/Q2fmWMCgIflXILFLT81lX.png)
The 10A series that once ran on the Midōsuji Line of the Osaka Municipal Transportation Bureau (now Osaka Metro). It tends to become an otherworldly 10A.
OsakaMetro10A, subway station, train station, train, multiple boys, bag, real world location, multiple girls, railroad tracks, pants, 6+boys, black hair, rolling suitcase, holding, outdoors, tail lights
OsakaMetro10A, subway station, train station, train, railroad tracks, hat, 1boy, scenery, realistic, uniform, railroad worker, outdoors, tail lights

## OsakaMetro20
![00004-lametta_v2012 - 2159394710.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/dlur0s6MnKNowf4YGlvCd.png)
Osaka Metro Chūō Line 20 series. It tends to become an otherworldly 20 series.
OsakaMetro20, subway station, train, train station, scenery, railroad tracks, ceiling, ceiling light, headlight
OsakaMetro20, subway station, train, train station, multiple boys, railroad tracks, real world location, multiple girls, scenery, ceiling, ceiling light, tail lights, headlight

## OsakaMetro21
![00005-lametta_v2012 - 594479983.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/yKxAIwhRUjZswUJqizf-q.png)
Osaka Metro Midōsuji Line 21 series. It tends to become an otherworldly 21 series.
OsakaMetro21, subway station, train, train station, railroad tracks, scenery, real world location, outdoors, ceiling, ceiling light, headlight
OsakaMetro21, subway station, train, train station, scenery, ceiling, tail lights

## OsakaMetro22
![00009-lametta_v2012 - 3078272944.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/ZO-m43qGq6B-IYcmkAr7F.png)
Osaka Metro Tanimachi Line 22 series. It tends to become an otherworldly 22 series.
OsakaMetro22, subway station, train, train station, multiple girls, pants, bag, 1boy, railroad tracks, multiple boys, ceiling, ceiling light, headlight
OsakaMetro22, subway station, train, train station, multiple boys, 6+boys, hat, real world location, scenery, shirt, night, pants, gloves, bag, holding, white shirt, uniform, railroad worker, ceiling, ceiling light, tail lights

## OsakaMetro66
![00002-lametta_v2012 - 3901540579.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/Y7Zn1Epem7goLAnObbaIb.png)
Osaka Metro Sakaisuji Line 66 series. It tends to become an otherworldly 66 series.
OsakaMetro66, subway station, train, scenery, train station, outdoors, ceiling, headlight
OsakaMetro66, subway station, train, scenery, train station, tiles, tile floor, door, ceiling, ceiling light, tail lights

## OsakaMetro70
![00001-lametta_v2012 - 562878862.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/BALD_kKIVYCoOXwMSAokb.png)
Osaka Metro Nagahori Tsurumi-ryokuchi Line 70 series. It tends to become an otherworldly 70 series.
OsakaMetro70, subway station, train, scenery, train station, night, ceiling, ceiling light, headlight
OsakaMetro70, subway station, train, train station, railroad tracks, scenery, outdoors, ceiling, ceiling light, taillight

## OsakaMetro80
![00002-lametta_v2012 - 4237570337.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/JPml4V6XbHKlDguaDLptf.png)
Osaka Metro Imazatosuji Line 80 series. It tends to become an otherworldly 80 series.
OsakaMetro80, subway station, train, scenery, ceiling, ceiling light, scenery, headlight
OsakaMetro80, subway station, train, scenery, door, train station, outdoors, light, ceiling, ceiling light, scenery, taillight

## OsakaMetro400
![00004-lametta_v2012 - 797876422.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/QaB09xaNUpln_xkABeeAO.png)
Osaka Metro Chūō Line 400 series, notable for its futuristic design. It tends to evolve all the way into another world.
OsakaMetro400, subway station, train station, scenery, headlight
OsakaMetro400, subway station, train station, train, scenery, taillight

## OsakaMetro30000
![00011-lametta_v2012 - 1616524035.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/A5LckzEo5PefwimlByogC.png)
Osaka Metro Midōsuji Line 30000 series. It collapses frequently.
OsakaMetro30000, subway station, 1boy, pants, shirt, male focus, white shirt, black pants, hat, solo, from behind, black hair, night, headlight
OsakaMetro30000, subway station, train station, scenery, night, railroad tracks, train, sign, door, real world location, ceiling, ceiling light, headlight
OsakaMetro30000, subway station, police, hat, train station, police uniform, motor vehicle, train, scenery, taillight

## TokyoMetro01_SD15
![00010-lametta_v2012 - 3765848632.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/Sk3sqo1XiCeDj5Bqv9YnM.png)
The Teito Rapid Transit Authority (TRTA, now Tokyo Metro) Ginza Line 01 series. The light placement and the front emergency door tend to collapse.
TokyoMetro01, subway station, train station, train, 6+boys, multiple boys, blurry, real world location, depth of field, railroad tracks, bag, multiple girls, scenery, ceiling, ceiling light, headlight
TokyoMetro01, subway station, train station, train, scenery, real world location, railroad tracks, multiple boys, multiple girls, ceiling, ceiling light, tail lights

## TokyoMetro02_SD15
![00014-lametta_v2012 - 89141966.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/3J4vQRIt78NMxygmMikei.png)
The TRTA Marunouchi Line 02 series. The light placement and the front emergency door tend to collapse.
TokyoMetro02, subway station, train station, train, scenery, railroad tracks, real world location, realistic, night, ceiling, ceiling light, headlight
TokyoMetro02, subway station, train station, train, scenery, railroad tracks, ceiling, ceiling light, taillight

## TokyoMetro03_SD15
![00006-lametta_v2012 - 106547182.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/KX7XBNsq0kYit7GE9Iq_E.png)
The TRTA Hibiya Line 03 series. The light placement and the front emergency door tend to collapse.
TokyoMetro03, subway station, train station, train, scenery, railroad tracks, sign, real world location, bag, outdoors, day, ceiling, ceiling light, headlight
TokyoMetro03, subway station, train station, train, multiple boys, bag, scenery, railroad tracks, skirt, 6+boys, ceiling, ceiling light, tail lights

## TokyoMetro05_SD15
![00010-lametta_v2012 - 2831566761.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/gJijMDCGT--mjyuj48-bs.png)
The TRTA Tōzai Line 05 series. The light placement and the front emergency door tend to collapse.
TokyoMetro05, subway station, train station, train, railroad tracks, scenery, outdoors, bench, ceiling, ceiling light, headlight
TokyoMetro05, subway station, train station, train, railroad tracks, white shirt, pants, 1boy, shirt, black hair, scenery, male focus, hat, black pants, short sleeves, standing, real world location, black headwear, wide shot, ceiling, ceiling light, tail lights

## TokyoMetro10000_SD15
![00005-lametta_v2012 - 64129083.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/S6uqah-mW55nbE28GnSJj.png)
Tokyo Metro Yūrakuchō / Fukutoshin Line 10000 series. Its rounded front end is distinctive, and images where that front end has completely collapsed tend to be generated.
TokyoMetro10000, subway station, train station, train, scenery, sign, outdoors, 1boy, jacket, pants, standing, blurry, ceiling, ceiling light, headlight
TokyoMetro10000, subway station, train station, train, railroad tracks, real world location, scenery, photo background, realistic, 1boy, vehicle focus, ceiling, ceiling light, tail lights, headlight
TokyoMetro10000, subway station, train station, train, scenery, railroad tracks, real world location, building, outdoors, day, ceiling, ceiling light, tail lights

## TokyoMetro1000_SD15
![00001-lametta_v2012 - 2771167769.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/otNXcJEzLAMyouQFKUOu5.png)
Tokyo Metro Ginza Line 1000 series. The real design is calm and retro, but plenty of far-from-calm images get generated.
TokyoMetro1000, subway station, train station, train, scenery, sign, light, railroad tracks, ceiling, ceiling light, headlight
TokyoMetro1000, subway station, train station, multiple boys, train, hat, scenery, railroad tracks, real world location, ceiling, ceiling light, tail lights

## TokyoMetro5000_SD15
![00005-lametta_v2012 - 2507980262.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/lTzb1_Gl2gEeSocYojsqn.png)
The 5000 series that used to run on the TRTA Tōzai Line. The light placement and the front door tend to collapse.
TokyoMetro5000, subway station, train station, train, scenery, railroad tracks, outdoors, real world location, ceiling, ceiling light, headlight
TokyoMetro5000, subway station, train station, train, railroad tracks, black hair, 1boy, standing, 1girl, pants, shoes, wide shot, scenery, real world location, tail lights

## TokyoMetro6000_SD15
![00002-lametta_v2012 - 1411113388.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/nBaXrurrVyTd6fapl3q4E.png)
The 6000 series that used to run on the TRTA Chiyoda Line. The light placement and the front door tend to collapse.
TokyoMetro6000, subway station, train station, train, railroad tracks, scenery, chinese text, real world location, headlight
TokyoMetro6000, subway station, train station, train, scenery, fence, outdoors, real world location, night, railroad tracks, ceiling, ceiling light, tail lights

## TokyoMetro7000_SD15
![00001-lametta_v2012 - 4203114707.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/qlPri1_J6cm-apPBSgPZv.png)
The 7000 series that used to run on the TRTA Yūrakuchō Line. The light placement and the front door tend to collapse.
TokyoMetro7000, subway station, train station, train, scenery, railroad tracks, tiles, ceiling, ceiling light, headlight
TokyoMetro7000, subway station, train station, train, railroad tracks, scenery, outdoors, real world location, day, building, tail lights

## TokyoMetro8000_SD15
![00004-lametta_v2012 - 656905935.png](https://cdn-uploads.huggingface.co/production/uploads/63056ac3fca1d8d92b8061a3/8ezjl9HDYEcP--i74EuLY.png)
The TRTA Hanzōmon Line 8000 series. The light placement and the front door tend to collapse.
TokyoMetro8000, train, railroad tracks, real world location, outdoors, scenery, building, sky, day, power lines, headlight
TokyoMetro8000, subway station, train station, train, scenery, ceiling, ceiling light, headlight
TokyoMetro8000, subway station, train station, train, scenery, tail lights
blueapple8259/TinyKo-v5-c
blueapple8259
2024-02-03T05:48:31Z
64
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "dataset:maywell/korean_textbooks", "dataset:nlpai-lab/kullm-v2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T05:32:37Z
--- license: mit datasets: - maywell/korean_textbooks - nlpai-lab/kullm-v2 language: - ko --- This model was obtained by fine-tuning the [TinyKo-v5-b](https://huggingface.co/blueapple8259/TinyKo-v5-b) model on the [kullm-v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2) dataset. Warning: performance is very poor and hallucination is severe. ## Model information model type: llama hidden size: 6 hidden size: 127 num attention heads: 16 num key value heads: 4
blueapple8259/TinyKo-v5-b
blueapple8259
2024-02-03T05:48:20Z
62
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "dataset:maywell/korean_textbooks", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T05:29:46Z
--- license: mit datasets: - maywell/korean_textbooks language: - ko --- This model was obtained by lightly fine-tuning the [TinyKo-v5-a](https://huggingface.co/blueapple8259/TinyKo-v5-a) model. Warning: performance is very poor and hallucination is severe. ## Model information model type: llama hidden size: 6 hidden size: 127 num attention heads: 16 num key value heads: 4
mohdmurtuzakhan/G8_mistral7b_qlora_1211_v02
mohdmurtuzakhan
2024-02-03T05:46:26Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-03T05:46:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LoneStriker/Blue-Orchid-2x7b-AWQ
LoneStriker
2024-02-03T05:40:37Z
30
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
2024-02-03T05:37:35Z
--- license: apache-2.0 --- **Blue-Orchid-2x7b** GGUF: https://huggingface.co/nakodanei/Blue-Orchid-2x7b_GGUF Roleplaying focused MoE Mistral model. One expert is a merge of mostly RP models, the other is a merge of mostly storywriting models. So it should be good at both. The base model is SanjiWatsuki/Kunoichi-DPO-v2-7B. - Expert 1 is a merge of LimaRP, Limamono, Noromaid 0.4 DPO and good-robot. - Expert 2 is a merge of Erebus, Holodeck, Dans-AdventurousWinds-Mk2, Opus, Ashhwriter and good-robot. ## Prompt template (LimaRP): ``` ### Instruction: {system prompt} ### Input: User: {prompt} ### Response: Character: ``` Alpaca prompt template should work fine too.
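A minimal text-generation sketch using the LimaRP-style template above. It assumes a recent `transformers` with AutoAWQ support installed for these AWQ weights; the system prompt and character turn are invented for illustration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LoneStriker/Blue-Orchid-2x7b-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Fill the LimaRP template from the card; the content here is just an example.
prompt = (
    "### Instruction:\n"
    "You are Character, the keeper of a small tavern in a fantasy town.\n\n"
    "### Input:\n"
    "User: Tell me about this town.\n\n"
    "### Response:\n"
    "Character:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```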
Gigazinie/QA_240202
Gigazinie
2024-02-03T05:32:32Z
15
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "question-answering", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-02-03T04:51:09Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: QA_240202 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # QA_240202 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 9.2349 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 100 | 7.1184 | | No log | 2.0 | 200 | 6.8504 | | No log | 3.0 | 300 | 9.1831 | | No log | 4.0 | 400 | 9.7956 | | 0.3744 | 5.0 | 500 | 9.2349 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
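A sketch of direct (non-pipeline) inference with this checkpoint, assuming it is an extractive-QA head on top of BERT; the question and context strings are illustrative only:

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "Gigazinie/QA_240202"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "What base model was used?"
context = "QA_240202 is a fine-tuned version of bert-base-uncased."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions and decode the span between them.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer_ids = inputs["input_ids"][0, start : end + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```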
LoneStriker/Blue-Orchid-2x7b-8.0bpw-h8-exl2
LoneStriker
2024-02-03T05:26:08Z
9
5
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-01T20:00:17Z
--- license: apache-2.0 --- **Blue-Orchid-2x7b** GGUF: https://huggingface.co/nakodanei/Blue-Orchid-2x7b_GGUF Roleplaying focused MoE Mistral model. One expert is a merge of mostly RP models, the other is a merge of mostly storywriting models. So it should be good at both. The base model is SanjiWatsuki/Kunoichi-DPO-v2-7B. - Expert 1 is a merge of LimaRP, Limamono, Noromaid 0.4 DPO and good-robot. - Expert 2 is a merge of Erebus, Holodeck, Dans-AdventurousWinds-Mk2, Opus, Ashhwriter and good-robot. ## Prompt template (LimaRP): ``` ### Instruction: {system prompt} ### Input: User: {prompt} ### Response: Character: ``` Alpaca prompt template should work fine too.
matteo1997/5_images_dreambooth_lora_step1000
matteo1997
2024-02-03T05:24:53Z
1
2
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-02-03T04:27:23Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: 'a green car in the forest' output: url: "image_0.png" - text: 'a green car in the forest' output: url: "image_1.png" - text: 'a green car in the forest' output: url: "image_2.png" - text: 'a green car in the forest' output: url: "image_3.png" base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a blue car license: openrail++ --- # SDXL LoRA DreamBooth - matteo1997/5_images_dreambooth_lora_step1000 <Gallery /> ## Model description These are matteo1997/5_images_dreambooth_lora_step1000 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a blue car to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](matteo1997/5_images_dreambooth_lora_step1000/tree/main) them in the Files & versions tab.
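A minimal `diffusers` sketch for trying these LoRA weights on top of the SDXL base model, loading the fp16-fix VAE mentioned above; the prompt is just an example built around the trigger phrase:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# VAE listed in the card as the one used during training.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# Load the DreamBooth LoRA adapter from this repo.
pipe.load_lora_weights("matteo1997/5_images_dreambooth_lora_step1000")

# "a blue car" is the trigger phrase from the card; the rest of the prompt is illustrative.
image = pipe("a blue car in the forest", num_inference_steps=30).images[0]
image.save("blue_car.png")
```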
LoneStriker/Blue-Orchid-2x7b-6.0bpw-h6-exl2
LoneStriker
2024-02-03T05:20:33Z
8
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-01T19:56:14Z
--- license: apache-2.0 --- **Blue-Orchid-2x7b** GGUF: https://huggingface.co/nakodanei/Blue-Orchid-2x7b_GGUF Roleplaying focused MoE Mistral model. One expert is a merge of mostly RP models, the other is a merge of mostly storywriting models. So it should be good at both. The base model is SanjiWatsuki/Kunoichi-DPO-v2-7B. - Expert 1 is a merge of LimaRP, Limamono, Noromaid 0.4 DPO and good-robot. - Expert 2 is a merge of Erebus, Holodeck, Dans-AdventurousWinds-Mk2, Opus, Ashhwriter and good-robot. ## Prompt template (LimaRP): ``` ### Instruction: {system prompt} ### Input: User: {prompt} ### Response: Character: ``` Alpaca prompt template should work fine too.
LoneStriker/Blue-Orchid-2x7b-3.0bpw-h6-exl2
LoneStriker
2024-02-03T05:07:19Z
4
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-01T19:45:12Z
--- license: apache-2.0 --- **Blue-Orchid-2x7b** GGUF: https://huggingface.co/nakodanei/Blue-Orchid-2x7b_GGUF Roleplaying focused MoE Mistral model. One expert is a merge of mostly RP models, the other is a merge of mostly storywriting models. So it should be good at both. The base model is SanjiWatsuki/Kunoichi-DPO-v2-7B. - Expert 1 is a merge of LimaRP, Limamono, Noromaid 0.4 DPO and good-robot. - Expert 2 is a merge of Erebus, Holodeck, Dans-AdventurousWinds-Mk2, Opus, Ashhwriter and good-robot. ## Prompt template (LimaRP): ``` ### Instruction: {system prompt} ### Input: User: {prompt} ### Response: Character: ``` Alpaca prompt template should work fine too.
kanishka/smolm-autoreg-bpe-counterfactual-babylm-pipps_and_keys_to_it_all_removal-seed_211-1e-3
kanishka
2024-02-03T05:04:37Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "opt", "text-generation", "generated_from_trainer", "dataset:kanishka/counterfactual-babylm-pipps_and_keys_to_it_all_removal", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-02T06:33:53Z
--- tags: - generated_from_trainer datasets: - kanishka/counterfactual-babylm-pipps_and_keys_to_it_all_removal metrics: - accuracy model-index: - name: smolm-autoreg-bpe-counterfactual-babylm-pipps_and_keys_to_it_all_removal-seed_211-1e-3 results: - task: name: Causal Language Modeling type: text-generation dataset: name: kanishka/counterfactual-babylm-pipps_and_keys_to_it_all_removal type: kanishka/counterfactual-babylm-pipps_and_keys_to_it_all_removal metrics: - name: Accuracy type: accuracy value: 0.40997045687548256 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smolm-autoreg-bpe-counterfactual-babylm-pipps_and_keys_to_it_all_removal-seed_211-1e-3 This model was trained from scratch on the kanishka/counterfactual-babylm-pipps_and_keys_to_it_all_removal dataset. It achieves the following results on the evaluation set: - Loss: 3.4342 - Accuracy: 0.4100 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 64 - seed: 211 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 32000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 3.5982 | 1.0 | 18594 | 3.7814 | 0.3600 | | 3.3842 | 2.0 | 37188 | 3.5917 | 0.3792 | | 3.2578 | 3.0 | 55782 | 3.4820 | 0.3923 | | 3.181 | 4.0 | 74376 | 3.4444 | 0.3975 | | 3.127 | 5.0 | 92970 | 3.4062 | 0.4023 | | 3.0853 | 6.0 | 111564 | 3.3876 | 0.4042 | | 3.0444 | 7.0 | 130158 | 3.3845 | 0.4051 | | 3.0164 | 8.0 | 148752 | 3.3997 | 0.4067 | | 2.9875 | 9.0 | 167346 | 3.3890 | 0.4077 | | 2.9637 | 10.0 | 185940 | 3.3966 | 0.4072 | | 2.9414 | 11.0 | 204534 | 3.3861 | 0.4084 | | 2.9102 | 12.0 | 223128 | 3.3732 | 0.4095 | | 2.8918 | 13.0 | 241722 | 3.3955 | 0.4091 | | 2.8738 | 14.0 | 260316 | 3.3978 | 0.4096 | | 2.8518 | 15.0 | 278910 | 3.3918 | 0.4102 | | 2.8325 | 16.0 | 297504 | 3.4144 | 0.4098 | | 2.8187 | 17.0 | 316098 | 3.4153 | 0.4102 | | 2.7944 | 18.0 | 334692 | 3.4143 | 0.4103 | | 2.7783 | 19.0 | 353286 | 3.4294 | 0.4100 | | 2.7617 | 20.0 | 371880 | 3.4342 | 0.4100 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
macarious/torgo_xlsr_finetune_M05_old
macarious
2024-02-03T04:50:09Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-02T20:40:50Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: torgo_xlsr_finetune_M05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # torgo_xlsr_finetune_M05 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7932 - Wer: 0.3577 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.5534 | 0.99 | 1000 | 3.4455 | 1.0 | | 2.3481 | 1.98 | 2000 | 1.8194 | 0.8971 | | 0.9664 | 2.97 | 3000 | 1.2685 | 0.6818 | | 0.672 | 3.96 | 4000 | 1.3412 | 0.6112 | | 0.5432 | 4.96 | 5000 | 1.4455 | 0.5275 | | 0.4393 | 5.95 | 6000 | 1.3948 | 0.4761 | | 0.3761 | 6.94 | 7000 | 1.8967 | 0.4785 | | 0.3474 | 7.93 | 8000 | 1.5481 | 0.4545 | | 0.309 | 8.92 | 9000 | 1.7275 | 0.4354 | | 0.284 | 9.91 | 10000 | 1.9297 | 0.4438 | | 0.2582 | 10.9 | 11000 | 1.4894 | 0.3971 | | 0.2426 | 11.89 | 12000 | 1.6811 | 0.3840 | | 0.2406 | 12.88 | 13000 | 1.7411 | 0.3935 | | 0.2281 | 13.88 | 14000 | 1.7894 | 0.3732 | | 0.1874 | 14.87 | 15000 | 1.7728 | 0.3864 | | 0.1918 | 15.86 | 16000 | 2.0315 | 0.3768 | | 0.1693 | 16.85 | 17000 | 1.7024 | 0.3672 | | 0.1551 | 17.84 | 18000 | 1.7620 | 0.3684 | | 0.1645 | 18.83 | 19000 | 1.7186 | 0.3696 | | 0.1527 | 19.82 | 20000 | 1.7932 | 0.3577 | ### Framework versions - Transformers 4.26.1 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.13.3
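A minimal transcription sketch with this fine-tuned wav2vec2 checkpoint; the audio file name is a placeholder, and XLSR models expect 16 kHz mono input (the pipeline will decode and resample common formats if ffmpeg is available):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="macarious/torgo_xlsr_finetune_M05_old",
)

# Placeholder file name; any speech recording readable by ffmpeg should work.
print(asr("sample_utterance.wav")["text"])
```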
matteo1997/10_images_dreambooth_lora_step1000
matteo1997
2024-02-03T04:25:31Z
1
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-02-03T03:12:33Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: 'a pink car driven on the expressway' output: url: "image_0.png" - text: 'a pink car driven on the expressway' output: url: "image_1.png" - text: 'a pink car driven on the expressway' output: url: "image_2.png" - text: 'a pink car driven on the expressway' output: url: "image_3.png" base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a blue car license: openrail++ --- # SDXL LoRA DreamBooth - matteo1997/10_images_dreambooth_lora_step1000 <Gallery /> ## Model description These are matteo1997/10_images_dreambooth_lora_step1000 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a blue car to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](matteo1997/10_images_dreambooth_lora_step1000/tree/main) them in the Files & versions tab.
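The card lists the trigger phrase and a download link but no loading code; a minimal, unofficial sketch of applying such SDXL LoRA weights with diffusers follows. The repo id comes from this listing, and the adapter file layout inside the repo is assumed to be the default one written by the DreamBooth LoRA training script.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model the LoRA was trained against.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Repo id taken from this listing; assumes the standard adapter file layout.
pipe.load_lora_weights("matteo1997/10_images_dreambooth_lora_step1000")

# "a blue car" is the trigger phrase given in the card.
image = pipe("a blue car", num_inference_steps=30).images[0]
image.save("blue_car.png")
```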
zhangHarry/orca_mini_3b_summary-epoch_0
zhangHarry
2024-02-03T04:21:53Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:pankajmathur/orca_mini_3b", "base_model:adapter:pankajmathur/orca_mini_3b", "region:us" ]
null
2024-01-20T03:57:01Z
--- library_name: peft base_model: psmathur/orca_mini_3b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
rhplus0831/maid-yuzu-v2-mid
rhplus0831
2024-02-03T04:17:12Z
4
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "mergekit", "merge", "base_model:smelborp/MixtralOrochi8x7B", "base_model:merge:smelborp/MixtralOrochi8x7B", "base_model:ycros/BagelMIsteryTour-v2-8x7B", "base_model:merge:ycros/BagelMIsteryTour-v2-8x7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T03:43:41Z
--- base_model: - smelborp/MixtralOrochi8x7B - ycros/BagelMIsteryTour-v2-8x7B library_name: transformers tags: - mergekit - merge --- # maid-yuzu-v2-mid This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [smelborp/MixtralOrochi8x7B](https://huggingface.co/smelborp/MixtralOrochi8x7B) * [ycros/BagelMIsteryTour-v2-8x7B](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: model: path: smelborp/MixtralOrochi8x7B dtype: bfloat16 merge_method: slerp parameters: t: - value: 0.375 slices: - sources: - layer_range: [0, 32] model: model: path: smelborp/MixtralOrochi8x7B - layer_range: [0, 32] model: model: path: ycros/BagelMIsteryTour-v2-8x7B ```
Crystalcareai/CrystalMiniCPM
Crystalcareai
2024-02-03T04:07:55Z
1
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "minicpm", "text-generation", "generated_from_trainer", "conversational", "custom_code", "base_model:openbmb/MiniCPM-2B-sft-bf16", "base_model:finetune:openbmb/MiniCPM-2B-sft-bf16", "autotrain_compatible", "region:us" ]
text-generation
2024-02-03T04:06:10Z
--- base_model: openbmb/MiniCPM-2B-sft-bf16 tags: - generated_from_trainer model-index: - name: qlora-out results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: openbmb/MiniCPM-2B-sft-bf16 load_in_8bit: false load_in_4bit: false strict: false push_dataset_to_hub: datasets: - path: teknium/GPT4-LLM-Cleaned type: alpaca dataset_prepared_path: val_set_size: 0.05 adapter: lora_model_dir: sequence_len: 4096 max_packed_sequence_len: lora_r: 8 lora_alpha: 32 lora_dropout: 0.05 lora_target_modules: lora_target_linear: true lora_fan_in_fan_out: wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: output_dir: ./qlora-out gradient_accumulation_steps: 2 micro_batch_size: 2 num_epochs: 1.5 optimizer: paged_adamw_8bit torchdistx_path: lr_scheduler: cosine learning_rate: 0.0001 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: true gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: true flash_attention: gptq_groupsize: gptq_model_v1: warmup_steps: 10 evals_per_epoch: 2 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.1 fsdp: fsdp_config: special_tokens: trust_remote_code: true ``` </details><br> # qlora-out This model is a fine-tuned version of [openbmb/MiniCPM-2B-sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0525 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 1.5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.0903 | 0.0 | 1 | 1.7199 | | 0.8959 | 0.5 | 1620 | 1.1007 | | 0.995 | 1.0 | 3240 | 1.0342 | | 0.864 | 1.5 | 4860 | 1.0525 | ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
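The config above fine-tunes on an alpaca-formatted dataset; a minimal loading sketch is shown below. Because the base model ships custom modeling code, `trust_remote_code=True` is assumed to be required for this checkpoint as well, and the alpaca-style prompt is only an assumption based on the training data format.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Crystalcareai/CrystalMiniCPM"  # repo id taken from this listing
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, trust_remote_code=True
)

# Alpaca-style prompt assumed from the training dataset; adjust if the repo documents another template.
prompt = "### Instruction:\nSummarize what a language model is in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```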
cinema4dr12/code-llama-7b-text-to-sql
cinema4dr12
2024-02-03T04:05:14Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:codellama/CodeLlama-7b-hf", "base_model:adapter:codellama/CodeLlama-7b-hf", "license:llama2", "region:us" ]
null
2024-02-03T03:26:49Z
--- license: llama2 library_name: peft tags: - trl - sft - generated_from_trainer datasets: - generator base_model: codellama/CodeLlama-7b-hf model-index: - name: code-llama-7b-text-to-sql results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # code-llama-7b-text-to-sql This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
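The card gives only training details; below is a minimal sketch of attaching the PEFT adapter to the CodeLlama base. The repo ids are taken from this listing, and the prompt layout is a hypothetical example rather than the format used during fine-tuning.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "codellama/CodeLlama-7b-hf"
adapter_id = "cinema4dr12/code-llama-7b-text-to-sql"  # repo id taken from this listing

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter on top of the base

# Hypothetical prompt layout; the card does not document the exact fine-tuning format.
prompt = "Schema: CREATE TABLE users(id INT, name TEXT);\nQuestion: How many users are there?\nSQL:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```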
YoelCanaza/base-beans-classification-vit-model-yoel
YoelCanaza
2024-02-03T04:03:54Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-01-23T08:16:35Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy widget: - src: https://huggingface.co/YoelCanaza/base-beans-classification-vit-model-yoel/resolve/main/healthy.jpeg example_title: Healthy - src: https://huggingface.co/YoelCanaza/base-beans-classification-vit-model-yoel/resolve/main/bean_rust.jpeg example_title: Bean Rust model-index: - name: prueba-vit-model-yoel results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # prueba-vit-model-yoel This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0081 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0212 | 3.85 | 500 | 0.0081 | 1.0 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.13.3
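The card's widget already points at two example leaf images; a minimal pipeline sketch that scores one of them is shown below (assumed usage, repo id taken from this listing).

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="YoelCanaza/base-beans-classification-vit-model-yoel",  # repo id taken from this listing
)

# One of the example images referenced in the card's widget section.
url = "https://huggingface.co/YoelCanaza/base-beans-classification-vit-model-yoel/resolve/main/healthy.jpeg"
print(classifier(url))
```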
AdAstra1/q-FrozenLake-v1-4x4-noSlippery
AdAstra1
2024-02-03T04:00:53Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-02-03T03:45:45Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="AdAstra1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
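The `load_from_hub` helper in the snippet above appears to come from the Deep RL course notebook rather than an installable package; a self-contained sketch that fetches and unpickles the Q-table directly is shown below. The dictionary keys are assumed from the course's conventions.

```python
import pickle
import numpy as np
import gymnasium as gym
from huggingface_hub import hf_hub_download

# Download the pickled model artifact from the Hub.
path = hf_hub_download(
    repo_id="AdAstra1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl"
)
with open(path, "rb") as f:
    model = pickle.load(f)  # dict with keys such as "env_id" and "qtable" (assumed layout)

env = gym.make(model["env_id"], is_slippery=False)
state, _ = env.reset()
action = int(np.argmax(model["qtable"][state]))  # greedy action from the learned Q-table (assumed key)
```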
Deepakkori45/LLAma_classes
Deepakkori45
2024-02-03T03:56:49Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-03T03:56:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jbuch808/tqc-PandaPickAndPlace-v3
jbuch808
2024-02-03T03:55:57Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaPickAndPlace-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-03T03:54:51Z
--- library_name: stable-baselines3 tags: - PandaPickAndPlace-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: TQC results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaPickAndPlace-v3 type: PandaPickAndPlace-v3 metrics: - type: mean_reward value: -50.00 +/- 0.00 name: mean_reward verified: false --- # **TQC** Agent playing **PandaPickAndPlace-v3** This is a trained model of a **TQC** agent playing **PandaPickAndPlace-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
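The card's usage section is still a TODO; one hedged way to fill it in is sketched below, assuming sb3-contrib's TQC implementation and a hypothetical checkpoint file name inside the repo.

```python
import gymnasium as gym
import panda_gym  # noqa: F401  (importing registers the PandaPickAndPlace-v3 environment)
from sb3_contrib import TQC
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(
    repo_id="jbuch808/tqc-PandaPickAndPlace-v3",   # repo id taken from this listing
    filename="tqc-PandaPickAndPlace-v3.zip",       # hypothetical file name inside the repo
)
model = TQC.load(checkpoint)

env = gym.make("PandaPickAndPlace-v3")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
print(action)
```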
CLMBR/pp-mod-subj-transformer-4
CLMBR
2024-02-03T03:44:20Z
2
0
transformers
[ "transformers", "pytorch", "opt", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T10:07:55Z
--- tags: - generated_from_trainer model-index: - name: pp-mod-subj2-transformer-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pp-mod-subj2-transformer-4 This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.9266 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 3052726 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-------:|:---------------:| | 4.2297 | 0.03 | 76320 | 4.2433 | | 4.0275 | 1.03 | 152640 | 4.0750 | | 3.9187 | 0.03 | 228960 | 4.0013 | | 3.8499 | 1.03 | 305280 | 3.9602 | | 3.8009 | 0.03 | 381600 | 3.9359 | | 3.754 | 1.03 | 457920 | 3.9211 | | 3.7162 | 0.03 | 534240 | 3.9103 | | 3.6839 | 1.03 | 610560 | 3.9040 | | 3.6566 | 0.03 | 686880 | 3.9007 | | 3.6332 | 1.03 | 763200 | 3.8988 | | 3.6064 | 0.03 | 839520 | 3.8968 | | 3.5872 | 1.03 | 915840 | 3.8964 | | 3.5702 | 0.03 | 992160 | 3.8978 | | 3.5552 | 1.03 | 1068480 | 3.8977 | | 3.5343 | 0.03 | 1144800 | 3.9006 | | 3.5197 | 1.03 | 1221120 | 3.9013 | | 3.5064 | 0.03 | 1297440 | 3.9038 | | 3.4941 | 0.03 | 1373760 | 3.9058 | | 3.481 | 1.03 | 1450080 | 3.9078 | | 3.4726 | 0.03 | 1526400 | 3.9097 | | 3.4675 | 1.03 | 1602720 | 3.9105 | | 3.4502 | 0.03 | 1679040 | 3.9132 | | 3.4381 | 1.03 | 1755360 | 3.9147 | | 3.4265 | 0.03 | 1831680 | 3.9167 | | 3.4144 | 1.03 | 1908000 | 3.9173 | | 3.4049 | 0.03 | 1984320 | 3.9193 | | 3.3904 | 0.03 | 2060640 | 3.9211 | | 3.3792 | 1.03 | 2136960 | 3.9233 | | 3.3687 | 0.03 | 2213280 | 3.9250 | | 3.3597 | 1.03 | 2289600 | 3.9263 | | 3.3466 | 0.03 | 2365920 | 3.9275 | | 3.3407 | 1.03 | 2442240 | 3.9272 | | 3.3293 | 0.03 | 2518560 | 3.9300 | | 3.3238 | 0.03 | 2594880 | 3.9299 | | 3.3127 | 1.03 | 2671200 | 3.9311 | | 3.3062 | 0.03 | 2747520 | 3.9313 | | 3.3036 | 0.03 | 2823840 | 3.9303 | | 3.2911 | 1.03 | 2900160 | 3.9300 | | 3.2841 | 0.03 | 2976480 | 3.9290 | | 3.2768 | 1.02 | 3052726 | 3.9266 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.3
acrastt/Marx-3B-V2
acrastt
2024-02-03T03:37:03Z
1,526
25
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "en", "dataset:totally-not-an-llm/EverythingLM-data-V2-sharegpt", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-22T22:41:21Z
--- language: - en license: apache-2.0 library_name: transformers datasets: - totally-not-an-llm/EverythingLM-data-V2-sharegpt model-index: - name: Marx-3B-V2 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 44.03 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Marx-3B-V2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 72.92 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Marx-3B-V2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 27.84 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Marx-3B-V2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 39.92 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Marx-3B-V2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 66.54 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Marx-3B-V2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 1.21 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Marx-3B-V2 name: Open LLM Leaderboard --- <a href="https://www.buymeacoffee.com/acrastt" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> This is [OpenLLaMA 3B V2](https://huggingface.co/openlm-research/open_llama_3b_v2) finetuned on [EverythingLM Data V2(ShareGPT format)](https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data-V2-sharegpt) for 2 epochs. Prompt template: ``` ### HUMAN: {prompt} ### RESPONSE: <leave a newline for the model to answer> ``` q4_1 GGML quant available [here](https://huggingface.co/NikolayKozloff/Marx-3B-V2/).</br> q4_1 GGUF quant available [here]( https://huggingface.co/NikolayKozloff/Marx-3B-V2-GGUF/). # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__Marx-3B-V2) | Metric |Value| |---------------------------------|----:| |Avg. |42.08| |AI2 Reasoning Challenge (25-Shot)|44.03| |HellaSwag (10-Shot) |72.92| |MMLU (5-Shot) |27.84| |TruthfulQA (0-shot) |39.92| |Winogrande (5-shot) |66.54| |GSM8k (5-shot) | 1.21|
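A minimal generation sketch that applies the card's prompt template with plain transformers (assumed usage; the repo id comes from this listing):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "acrastt/Marx-3B-V2"  # repo id taken from this listing
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Prompt template copied from the card above.
prompt = "### HUMAN:\nExplain overfitting in one sentence.\n\n### RESPONSE:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```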
acrastt/Bean-3B
acrastt
2024-02-03T03:36:26Z
1,522
2
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "en", "dataset:64bits/lima_vicuna_format", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-02T00:06:46Z
--- language: - en license: apache-2.0 library_name: transformers datasets: - 64bits/lima_vicuna_format pipeline_tag: text-generation model-index: - name: Bean-3B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 40.36 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 72.0 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 26.43 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 36.11 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 65.67 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.53 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B name: Open LLM Leaderboard --- <a href="https://www.buymeacoffee.com/acrastt" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> This is [OpenLLaMA 3B V2](https://huggingface.co/openlm-research/open_llama_3b_v2) finetuned on [LIMA(ShareGPT format)](https://huggingface.co/datasets/64bits/lima_vicuna_format) for 2 epochs. Prompt template: ``` ### HUMAN: {prompt} ### RESPONSE: <leave a newline for the model to answer> ``` GGUF quantizations available [here](https://huggingface.co/maddes8cht/acrastt-Bean-3B-gguf). # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__Bean-3B) | Metric | Value | |-----------------------|---------------------------| | Avg. 
| 40.18 |
| ARC (25-shot)       | 40.36 |
| HellaSwag (10-shot) | 72.0  |
| MMLU (5-shot)       | 26.43 |
| TruthfulQA (0-shot) | 36.11 |
| Winogrande (5-shot) | 65.67 |
| GSM8K (5-shot)      | 0.53  |
dengh/a2c-PandaReachDense-v3
dengh
2024-02-03T03:36:06Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-03T03:28:08Z
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaReachDense-v3
      type: PandaReachDense-v3
    metrics:
    - type: mean_reward
      value: -0.23 +/- 0.14
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **PandaReachDense-v3**

This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub

...
```
acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1
acrastt
2024-02-03T03:35:56Z
1,535
1
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "en", "dataset:togethercomputer/RedPajama-Data-1T", "dataset:databricks/databricks-dolly-15k", "dataset:OpenAssistant/oasst1", "dataset:Muennighoff/natural-instructions", "dataset:Muennighoff/P3", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-27T19:42:41Z
--- language: - en license: apache-2.0 library_name: transformers datasets: - togethercomputer/RedPajama-Data-1T - databricks/databricks-dolly-15k - OpenAssistant/oasst1 - Muennighoff/natural-instructions - Muennighoff/P3 pipeline_tag: text-generation model-index: - name: RedPajama-INCITE-Chat-Instruct-3B-V1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 42.58 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 67.48 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 25.99 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 33.62 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 64.8 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.91 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1 name: Open LLM Leaderboard --- <a href="https://www.buymeacoffee.com/acrastt" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> This is an experimental merge of models [RedPajama-INCITE-Chat-3B-V1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1) and [RedPajama-INCITE-Instruct-3B-V1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-3B-v1).</br> This model is adaptive to prompt templates, but this template is recommended: ``` HUMAN: {prompt} ASSISTANT: ``` Feel free to change HUMAN or ASSISTANT. It will not change much.</br> GGML versions [here](https://huggingface.co/adadbbb/pajama_ggml) (Note that this is only compatible with [koboldcpp](https://github.com/LostRuins/koboldcpp)). 
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__RedPajama-INCITE-Chat-Instruct-3B-V1)

| Metric                           | Value |
|----------------------------------|------:|
| Avg.                             | 39.23 |
| AI2 Reasoning Challenge (25-Shot)| 42.58 |
| HellaSwag (10-Shot)              | 67.48 |
| MMLU (5-Shot)                    | 25.99 |
| TruthfulQA (0-shot)              | 33.62 |
| Winogrande (5-shot)              | 64.80 |
| GSM8k (5-shot)                   |  0.91 |
acrastt/Puma-3B
acrastt
2024-02-03T03:35:27Z
1,531
3
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "en", "dataset:totally-not-an-llm/sharegpt-hyperfiltered-3k", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-16T21:53:15Z
--- language: - en license: apache-2.0 library_name: transformers datasets: - totally-not-an-llm/sharegpt-hyperfiltered-3k pipeline_tag: text-generation model-index: - name: Puma-3B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 41.3 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Puma-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 71.85 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Puma-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 27.51 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Puma-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 38.34 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Puma-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 66.38 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Puma-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.76 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Puma-3B name: Open LLM Leaderboard --- <a href="https://www.buymeacoffee.com/acrastt" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> This is [OpenLLaMA 3B V2](https://huggingface.co/openlm-research/open_llama_3b_v2) finetuned on [ShareGPT Hyperfiltered](https://huggingface.co/datasets/totally-not-an-llm/sharegpt-hyperfiltered-3k) for 1 epochs. Prompt template: ``` ### HUMAN: {prompt} ### RESPONSE: <leave a newline for the model to answer> ``` GGML quants available [here](https://huggingface.co/TheBloke/Puma-3b-GGML).</br> GPTQ quants available [here](https://huggingface.co/TheBloke/Puma-3b-GPTQ). Note: Don't expect this model to be good, I was just starting out to finetune. So don't roast me please! # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__Puma-3B) | Metric | Value | |-----------------------|---------------------------| | Avg. 
| 41.02 |
| ARC (25-shot)       | 41.3  |
| HellaSwag (10-shot) | 71.85 |
| MMLU (5-shot)       | 27.51 |
| TruthfulQA (0-shot) | 38.34 |
| Winogrande (5-shot) | 66.38 |
| GSM8K (5-shot)      | 0.76  |
Verias/convo-devia
Verias
2024-02-03T03:27:22Z
6
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "license:cdla-permissive-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T03:25:18Z
--- license: cdla-permissive-2.0 ---
saishf/Kuno-Lake-7B-GGUF
saishf
2024-02-03T03:09:47Z
11
2
null
[ "gguf", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B", "base_model:merge:SanjiWatsuki/Kunoichi-DPO-v2-7B", "base_model:mistralai/Mistral-7B-v0.1", "base_model:merge:mistralai/Mistral-7B-v0.1", "base_model:senseable/WestLake-7B-v2", "base_model:merge:senseable/WestLake-7B-v2", "endpoints_compatible", "region:us" ]
null
2024-02-03T02:33:23Z
--- base_model: - mistralai/Mistral-7B-v0.1 - senseable/WestLake-7B-v2 - SanjiWatsuki/Kunoichi-DPO-v2-7B tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base. ### Models Merged The following models were included in the merge: * [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2) * [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: mistralai/Mistral-7B-v0.1 # No parameters necessary for base model - model: senseable/WestLake-7B-v2 parameters: density: 0.53 weight: 0.65 - model: SanjiWatsuki/Kunoichi-DPO-v2-7B parameters: density: 0.53 weight: 0.35 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: bfloat16 ```
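Since this repo hosts GGUF quantizations of the merge, a typical way to run one locally is with llama-cpp-python; the sketch below assumes a hypothetical local quant file name.

```python
from llama_cpp import Llama

# Hypothetical local file name; use whichever quant you downloaded from this repo.
llm = Llama(model_path="kuno-lake-7b.Q4_K_M.gguf", n_ctx=4096)

out = llm("Write a one-sentence greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```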
ND911/Franken-Maid-Slerp
ND911
2024-02-03T03:09:19Z
5
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE", "ND911/EE-LMaid-7B-Slerp", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T03:02:48Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE - ND911/EE-LMaid-7B-Slerp --- ![](maid.jpeg) Experimental RP merges - using SillyTavern with Min-P SanjiWatsuki/Loyal-Macaroni-Maid-7B, merged with ND911/EE-Maid-7B-Slerp which is a merge of SanjiWatsuki/Silicon-Maid-7B and maywell/Synatra-7B-v0.3-RP EE-LMaid-7B-Slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [SanjiWatsuki/Loyal-Macaroni-Maid-7B](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B) * [ND911/EE-Maid-7B-Slerp](https://huggingface.co/ND911/EE-Maid-7B-Slerp) # Franken-Maid-Slerp Franken-Maid-Slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE](https://huggingface.co/SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE) * [ND911/EE-LMaid-7B-Slerp](https://huggingface.co/ND911/EE-LMaid-7B-Slerp) ## 🧩 Configuration ```yaml slices: - sources: - model: SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE layer_range: [0, 32] - model: ND911/EE-LMaid-7B-Slerp layer_range: [0, 32] merge_method: slerp base_model: ND911/EE-LMaid-7B-Slerp parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
robbie0/vntl-7b-v0.3.1-hf-exl2
robbie0
2024-02-03T03:02:45Z
14
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "translation", "ja", "en", "dataset:lmg-anon/VNTL-v2.5-1k", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2024-02-02T18:30:25Z
---
license: llama2
datasets:
- lmg-anon/VNTL-v2.5-1k
language:
- ja
- en
pipeline_tag: translation
---

# VNTL v0.3.1 EXL2 quantization

branches
- main (4.0bpw)
- 5.6bpw
- 8.0bpw

original (unquantized): <https://huggingface.co/lmg-anon/vntl-7b-v0.3.1-hf>

---------

This is a merge of the [experimental VNTL v0.3.1 lora](https://huggingface.co/lmg-anon/vntl-7b-v0.3.1-lora) created using the [VNTL-v2.5-1k](https://huggingface.co/datasets/lmg-anon/VNTL-v2.5-1k) dataset.

This is a prompt example:

```
<<START>>
Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)
Name: Uryuu Sakuno (瓜生 桜乃) | Gender: Female

<<JAPANESE>>
[桜乃]: 『……ごめん』
<<ENGLISH>> (fidelity = absolute)
[Sakuno]: 『... Sorry.』</s>
<<JAPANESE>>
[新吾]: 「ううん、こう言っちゃなんだけど、迷子でよかったよ。桜乃は可愛いから、いろいろ心配しちゃってたんだぞ俺」
<<ENGLISH>> (fidelity = high)
```

The generated translation for that prompt, with temperature 0, is:

```
[Shingo]: 「No, don't apologize. I'm just glad you're safe. You're so cute, Sakuno, I was worried sick.」
```
Jimmyhd/llama2TimeBook
Jimmyhd
2024-02-03T02:58:01Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T00:23:11Z
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
rizla/rizla-17
rizla
2024-02-03T02:55:19Z
235
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "dpo", "merge", "mergekit", "base_model:mistralai/Mixtral-8x7B-Instruct-v0.1", "base_model:finetune:mistralai/Mixtral-8x7B-Instruct-v0.1", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-02T21:23:06Z
---
license: cc-by-nc-nd-4.0
base_model:
- mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
- dpo
- merge
- mergekit
---

# rizla been cooking while singing

# This is an experimental model that I made by merging two 2expmixtrals

The mergekitty is a tool that lets me mix and match different models into one big model, keeping all the smarts and skills of the original models. The llama70b is a huge language model that can make words for all kinds of things and ways, based on the GPT-4 thingy.

The merged model has 17 billion parameters and was made to run on 8gb of ram minimum in q3KL gguf.

## Merge me baby one more time

### Sending this contraption out straight to mergeland, wwhheeeeeeeeeeeee LFG 🚀
Nanum2/distilbert-base-uncased-finetuned-emotion
Nanum2
2024-02-03T02:52:12Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-03T01:19:15Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9275 - name: F1 type: f1 value: 0.9272505802943928 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2046 - Accuracy: 0.9275 - F1: 0.9273 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8195 | 1.0 | 250 | 0.3015 | 0.9095 | 0.9087 | | 0.2451 | 2.0 | 500 | 0.2046 | 0.9275 | 0.9273 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
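A minimal classification sketch with the transformers pipeline (assumed usage; the repo id is taken from this listing):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Nanum2/distilbert-base-uncased-finetuned-emotion",  # repo id taken from this listing
)
print(classifier("I'm thrilled that the fine-tuning finally converged!"))
```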
CLMBR/full-lstm-0
CLMBR
2024-02-03T02:51:16Z
2
0
transformers
[ "transformers", "pytorch", "rnn", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2024-01-26T10:07:33Z
--- tags: - generated_from_trainer model-index: - name: full2-lstm-0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # full2-lstm-0 This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.9726 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 3052726 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-------:|:---------------:| | 4.7998 | 0.03 | 76320 | 4.7682 | | 4.5113 | 1.03 | 152640 | 4.4833 | | 4.3656 | 0.03 | 228960 | 4.3470 | | 4.2808 | 1.03 | 305280 | 4.2632 | | 4.2151 | 0.03 | 381600 | 4.2061 | | 4.17 | 0.03 | 457920 | 4.1647 | | 4.1358 | 1.03 | 534240 | 4.1329 | | 4.1038 | 0.03 | 610560 | 4.1080 | | 4.0759 | 1.03 | 686880 | 4.0886 | | 4.0515 | 0.03 | 763200 | 4.0722 | | 4.0312 | 1.03 | 839520 | 4.0589 | | 4.0129 | 0.03 | 915840 | 4.0478 | | 4.0007 | 1.03 | 992160 | 4.0386 | | 3.9838 | 0.03 | 1068480 | 4.0304 | | 3.9688 | 1.03 | 1144800 | 4.0238 | | 3.9562 | 0.03 | 1221120 | 4.0180 | | 3.9415 | 1.03 | 1297440 | 4.0129 | | 3.9309 | 0.03 | 1373760 | 4.0080 | | 3.9178 | 1.03 | 1450080 | 4.0038 | | 3.9189 | 0.03 | 1526400 | 4.0004 | | 3.9119 | 1.03 | 1602720 | 3.9974 | | 3.9059 | 0.03 | 1679040 | 3.9936 | | 3.9076 | 1.03 | 1755360 | 3.9911 | | 3.9022 | 0.03 | 1831680 | 3.9889 | | 3.8923 | 1.03 | 1908000 | 3.9861 | | 3.8881 | 0.03 | 1984320 | 3.9846 | | 3.8813 | 1.03 | 2060640 | 3.9834 | | 3.8772 | 0.03 | 2136960 | 3.9821 | | 3.8762 | 0.03 | 2213280 | 3.9805 | | 3.869 | 1.03 | 2289600 | 3.9791 | | 3.8621 | 0.03 | 2365920 | 3.9779 | | 3.8579 | 0.03 | 2442240 | 3.9772 | | 3.8495 | 1.03 | 2518560 | 3.9763 | | 3.8465 | 0.03 | 2594880 | 3.9757 | | 3.8429 | 0.03 | 2671200 | 3.9751 | | 3.846 | 1.03 | 2747520 | 3.9743 | | 3.8439 | 0.03 | 2823840 | 3.9737 | | 3.8466 | 0.03 | 2900160 | 3.9731 | | 3.8495 | 1.03 | 2976480 | 3.9729 | | 3.8507 | 0.02 | 3052726 | 3.9726 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.3
Shiyaku/caulkinumCheckPoint
Shiyaku
2024-02-03T02:48:58Z
0
3
null
[ "license:openrail", "region:us" ]
null
2023-04-14T09:11:25Z
--- license: openrail --- # caulkinum Series Checkpoint (caulkinumシリーズ チェックポイント) It is also available at CIVITAI (CIVITAIにも置いてあります) <https://civitai.com/user/489> ※ 日本語での説明文は後半にあります ![](./img/00100-1630068329-caulkinumV2_AR3-%5B70dfd3963f%5D-best_quality%2C_high_detailed%2C_Faint_lips%2CUC_realistic%2C_cinematic_lighting_petite_1girl_gradient_hair_yellow_eyes%2C_seductive_happy.png) Sample - 作例 (caulkinumV2_AR3) ``` best quality, high detailed, Faint lips,UC:realistic, cinematic lighting petite 1girl gradient hair yellow eyes, seductive happy school_Uniform, Disney land, warm light sunset, rays light sparkles lens flare deep shadows, depth of field peerless scenery sentimental Negative prompt: title text, signature, watermark, username, artist name EasyNegative NSFW Steps: 15, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1630068329, Size: 640x256, Model hash: 70dfd3963f, Model: caulkinumV2_AR3, VAE hash: df3c506e51, VAE: w14_kl-f8-anime2.vae.pt, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires steps: 10, Hires upscaler: R-ESRGAN 4x+ Anime6B ``` There are more images in the img folder (imgフォルダには他の画像もあります) <https://huggingface.co/Shiyaku/caulkinumCheckPoint/tree/main/img> ## Overview We did a hierarchical merge with Silicon28 as the base, focusing on the Elysium series. I made many attempts to achieve my preferred style of painting, but I did not keep records, so the detailed recipe was lost. I have decided to leave them here, organizing the remaining results. I will be very happy if someone likes these models. ## Model Variations ### 1. caulkinumV2 series for Anime Style These models are used to generate so-called 2D illustrations. Currently, there are any types of models with different styles. Particular emphasis is placed on the expression of eyes and light. #### modern-game-like - [caulkinumV2_AR for modern gamegraphic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_AR.safetensors) - [caulkinumV2_AR2 for modern gamegraphic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_AR2.safetensors) - [caulkinumV2_AR3 for modern gamegraphic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_AR3.safetensors) - [caulkinumV2_A4FS for modern gamegraphic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_A4FS.safetensors) #### anime-like - [caulkinumV2_ARCT for Anime](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_ARCT.safetensors) - [caulkinumV2_ARc for Anime](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_ARc.safetensors) - [caulkinumV2_ARNL for Anime](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_ARNL.safetensors) - [caulkinumV2_HOG for NSFW](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_HOG.safetensors) - [caulkinumV2_HOG2 for NSFW](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_HOG2.safetensors) #### cel-artistic - [caulkinumV2_NCA for cel-artistic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_NCA.safetensors) #### Pop Artistic - [caulkinumV2_NLPS for POP Art](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_NCA.safetensors) #### fluffy-beastman-anime-like - [caulkinumV2_FGA for furry](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_FGA.safetensors) ### 2. 
caulkinumV1_RL series for semi Realistic Style This model is aimed at the borderline between anime and live-action, which is generally referred to as 2.5D. This model also focuses on the expression of eyes and light. #### 2.5D for semi realistic - [caulkinumV1_RL for 2.5D](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV1_RL.safetensors) - [caulkinumV1_RLBT for 2.5D](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV1_RLBT.safetensors) #### 2D Characters + Realistic Backgrounds - [caulkinumV1_RLCT for VirtualReal](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV1_RLCT.safetensors) ### 3. caulkinumV1_S Series for Realistic Style This model is for realistic character generation. We aimed for a JRPG character style, more western than oriental. #### realistic - [caulkinumV1_S8 for Realistic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV1_S8.safetensors) - [caulkinumV1_S4FN for Realistic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV1_S4FN.safetensors) - [caulkinumV1_SJA for Realistic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV1_SJA.safetensors) ## 概要 Silicon28を根底としてElysiumシリーズを中心に階層マージしました。 好みの画風となるように試行錯誤を繰り返しましたが、記録を取っていなかったので詳細なレシピは失われました。 私は残った成果を整理しつつ、この場所へ残しておくことにしました。 これらのモデルを誰かが気に入ってくれたなら私はとても幸せでしょう。 ## モデルバリエーション ### 1. caulkinumV2シリーズ for Anime Style いわゆる2Dイラストを生成するためのモデルです。 現在は画風違いで数種類あります。 特に瞳と光の表現に力を入れています。 #### モダン・ゲーム的 - [caulkinumV2_AR for modern gamegraphic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_AR.safetensors) - [caulkinumV2_AR2 for modern gamegraphic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_AR2.safetensors) - [caulkinumV2_AR3 for modern gamegraphic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_AR3.safetensors) - [caulkinumV2_A4FS for modern gamegraphic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_A4FS.safetensors) #### アニメ的 - [caulkinumV2_ARCT for Anime](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_ARCT.safetensors) - [caulkinumV2_ARc for Anime](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_ARc.safetensors) - [caulkinumV2_ARNL for Anime](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_ARNL.safetensors) - [caulkinumV2_HOG for NSFW](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_HOG.safetensors) - [caulkinumV2_HOG2 for NSFW](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_HOG2.safetensors) #### セル画的 - [caulkinumV2_NCA for cel-artistic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_NCA.safetensors) #### ポップアート的 - [caulkinumV2_NLPS for POP Art](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_NCA.safetensors) #### もふもふ獣人アニメ的 - [caulkinumV2_FGA for furry](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_FGA.safetensors) ### 2. 
caulkinumV1_RLシリーズ for semi Realistic Style 一般的に2.5Dと言われるような、アニメと実写の境目を目指したモデルです。 こちらも瞳と光の表現に力を入れています。 #### 2.5D 半写実的 - [caulkinumV1_RL for 2.5D](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV1_RL.safetensors) - [caulkinumV1_RLBT for 2.5D](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV1_RLBT.safetensors) #### 2Dキャラクター + リアル背景 - [caulkinumV1_RLCT for VirtualReal Style](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV1_RLCT.safetensors) ### 3. caulkinumV1_Sシリーズ for Realistic Style 写実的なキャラクター生成を行うためのモデルです。 東洋的よりは西洋寄り、JRPGキャラクター的な方向性を目指しました。 #### 写実的 - [caulkinumV1_S8 for Realistic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV1_S8.safetensors) - [caulkinumV1_S4FN for Realistic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV1_S4FN.safetensors) - [caulkinumV1_SJA for Realistic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV1_SJA.safetensors) ## Illustration Sample (サンプルイラスト) Note: N2_CA renamed to V2_NCA ※ N2_CA は V2_NCA に名前変更しました ![](./img/tmp37_g1a63.png) ![](./img/tmp5fjy7hi2.png) ![](./img/tmpq8k5vcs1.png) ![](./img/tmpz8xxghst.png) ![](./img/tmpt3u7mq8u.png) ![](./img/V1_RL-1.png) ![](./img/EV1ChS8S4.png) ![](./img/ChS8S4-1.png) ![](./img/chis8sja.png) ![](./img/ar2arctncafga.png) ![](./img/2c4FSNLNCPS-2.png) ![](./img/a45ar2hog.png) ![](./img/tmptgcayf2t.png) ## Turn to the Afterword. (あとがきに変えて) ### Donation (寄付) Coffee will increase work efficiency and motivation. コーヒーは作業効率を高めモチベーションをアップさせるでしょう。 <https://ko-fi.com/489489> ### thanks (謝辞) Thank you for reading to the end. I hope one of you will use it. We would also like to thank all those involved in SD development and those who developed the models for the merge. 最後まで読んでくれてありがとうございます。 使用してくれるか方が一人でもいることを祈っています。 また、SD開発に関わる全ての方、マージ用のモデルを開発した方々に感謝いたします。 ### Contact (連絡先) #### Twitter <https://twitter.com/Shiyaku> #### pixiv <https://www.pixiv.net/users/63951151> #### civitai <https://civitai.com/user/489> Note: I have posted many images on pixiv and civitai for your reference. ※ pixivとcivitaiには沢山の画像を投稿しているので参考にしてください
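As a minimal, hypothetical usage sketch (not part of the original card), one of these checkpoints could be loaded with the `diffusers` library using the sampler settings from the sample image near the top of this card (DPM++ 2M Karras, 15 steps, CFG 7, 640x256). The local file path is an assumption; download the `.safetensors` file from the `model` folder of this repository first, and note that the external VAE and hires-fix pass from the sample are omitted here.

```python
# Hypothetical sketch: load caulkinumV2_AR3 with diffusers and approximate the
# sampler settings listed in the sample image (external VAE / hires fix omitted).
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Path is an assumption: download model/caulkinumV2_AR3.safetensors from this repo first.
pipe = StableDiffusionPipeline.from_single_file(
    "caulkinumV2_AR3.safetensors",
    torch_dtype=torch.float16,
)
# DPM++ 2M Karras, as in the sample generation settings
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
pipe = pipe.to("cuda")

image = pipe(
    prompt="best quality, high detailed, cinematic lighting, petite 1girl, gradient hair, yellow eyes, school uniform, warm light, sunset",
    negative_prompt="title text, signature, watermark, username, artist name, EasyNegative, NSFW",
    num_inference_steps=15,
    guidance_scale=7.0,
    width=640,
    height=256,
).images[0]
image.save("caulkinum_sample.png")
```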
Askahoward/NeuralPipe-7B-slerp
Askahoward
2024-02-03T02:40:17Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "OpenPipe/mistral-ft-optimized-1218", "mlabonne/NeuralHermes-2.5-Mistral-7B", "base_model:OpenPipe/mistral-ft-optimized-1218", "base_model:merge:OpenPipe/mistral-ft-optimized-1218", "base_model:mlabonne/NeuralHermes-2.5-Mistral-7B", "base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T02:35:15Z
--- tags: - merge - mergekit - lazymergekit - OpenPipe/mistral-ft-optimized-1218 - mlabonne/NeuralHermes-2.5-Mistral-7B base_model: - OpenPipe/mistral-ft-optimized-1218 - mlabonne/NeuralHermes-2.5-Mistral-7B --- # NeuralPipe-7B-slerp NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) * [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: OpenPipe/mistral-ft-optimized-1218 layer_range: [0, 32] - model: mlabonne/NeuralHermes-2.5-Mistral-7B layer_range: [0, 32] merge_method: slerp base_model: OpenPipe/mistral-ft-optimized-1218 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Askahoward/NeuralPipe-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
fong33/NeuralPipe-7B-slerp
fong33
2024-02-03T02:39:12Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "OpenPipe/mistral-ft-optimized-1218", "mlabonne/NeuralHermes-2.5-Mistral-7B", "base_model:OpenPipe/mistral-ft-optimized-1218", "base_model:merge:OpenPipe/mistral-ft-optimized-1218", "base_model:mlabonne/NeuralHermes-2.5-Mistral-7B", "base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T02:35:22Z
--- tags: - merge - mergekit - lazymergekit - OpenPipe/mistral-ft-optimized-1218 - mlabonne/NeuralHermes-2.5-Mistral-7B base_model: - OpenPipe/mistral-ft-optimized-1218 - mlabonne/NeuralHermes-2.5-Mistral-7B --- # NeuralPipe-7B-slerp NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) * [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: OpenPipe/mistral-ft-optimized-1218 layer_range: [0, 32] - model: mlabonne/NeuralHermes-2.5-Mistral-7B layer_range: [0, 32] merge_method: slerp base_model: OpenPipe/mistral-ft-optimized-1218 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "fong33/NeuralPipe-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
chathuranga-jayanath/codet5-small-v12
chathuranga-jayanath
2024-02-03T02:35:47Z
29
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:Salesforce/codet5-small", "base_model:finetune:Salesforce/codet5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-02T10:32:58Z
--- license: apache-2.0 base_model: Salesforce/codet5-small tags: - generated_from_trainer model-index: - name: codet5-small-v12 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codet5-small-v12 This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1925 - Bleu Score: 0.0003 - Gen Len: 13.4845 ## Model description Trained, - on: chathuranga-jayanath/selfapr-full-train-data - epoch: 3 ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 30 - eval_batch_size: 30 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu Score | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:----------:|:-------:| | 0.2855 | 1.0 | 11070 | 0.2338 | 0.0003 | 13.528 | | 0.2409 | 2.0 | 22140 | 0.2030 | 0.0003 | 13.4318 | | 0.2271 | 3.0 | 33210 | 0.1925 | 0.0003 | 13.4845 | ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
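Since the card does not include a usage snippet, here is a minimal, hypothetical sketch of loading the fine-tuned checkpoint for text2text generation; the example input string is invented, and the real input format follows the training dataset referenced above.

```python
# Hypothetical usage sketch (not from the original card): load the fine-tuned
# CodeT5 checkpoint and run a single text2text generation.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "chathuranga-jayanath/codet5-small-v12"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Invented example input; the real format follows chathuranga-jayanath/selfapr-full-train-data.
inputs = tokenizer("public int add(int a, int b) { return a - b; }", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```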
lchakkei/model
lchakkei
2024-02-03T02:29:28Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-01T17:12:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
frankc350/NeuralPipe-7B-slerp
frankc350
2024-02-03T02:28:03Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "OpenPipe/mistral-ft-optimized-1218", "mlabonne/NeuralHermes-2.5-Mistral-7B", "base_model:OpenPipe/mistral-ft-optimized-1218", "base_model:merge:OpenPipe/mistral-ft-optimized-1218", "base_model:mlabonne/NeuralHermes-2.5-Mistral-7B", "base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T02:23:45Z
--- tags: - merge - mergekit - lazymergekit - OpenPipe/mistral-ft-optimized-1218 - mlabonne/NeuralHermes-2.5-Mistral-7B base_model: - OpenPipe/mistral-ft-optimized-1218 - mlabonne/NeuralHermes-2.5-Mistral-7B --- # NeuralPipe-7B-slerp NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) * [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: OpenPipe/mistral-ft-optimized-1218 layer_range: [0, 32] - model: mlabonne/NeuralHermes-2.5-Mistral-7B layer_range: [0, 32] merge_method: slerp base_model: OpenPipe/mistral-ft-optimized-1218 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "frankc350/NeuralPipe-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
dictatee/NeuralPipe-7B-slerp
dictatee
2024-02-03T02:24:30Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "OpenPipe/mistral-ft-optimized-1218", "mlabonne/NeuralHermes-2.5-Mistral-7B", "base_model:OpenPipe/mistral-ft-optimized-1218", "base_model:merge:OpenPipe/mistral-ft-optimized-1218", "base_model:mlabonne/NeuralHermes-2.5-Mistral-7B", "base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T02:20:25Z
--- tags: - merge - mergekit - lazymergekit - OpenPipe/mistral-ft-optimized-1218 - mlabonne/NeuralHermes-2.5-Mistral-7B base_model: - OpenPipe/mistral-ft-optimized-1218 - mlabonne/NeuralHermes-2.5-Mistral-7B --- # NeuralPipe-7B-slerp NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) * [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: OpenPipe/mistral-ft-optimized-1218 layer_range: [0, 32] - model: mlabonne/NeuralHermes-2.5-Mistral-7B layer_range: [0, 32] merge_method: slerp base_model: OpenPipe/mistral-ft-optimized-1218 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "dictatee/NeuralPipe-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
XinHun/YD_JQS
XinHun
2024-02-03T02:22:24Z
0
1
null
[ "license:other", "region:us" ]
null
2024-02-03T02:20:35Z
--- license: other license_name: '1' license_link: LICENSE ---
TMOU715/NeuralPipe-7B-slerp
TMOU715
2024-02-03T02:11:24Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "OpenPipe/mistral-ft-optimized-1218", "mlabonne/NeuralHermes-2.5-Mistral-7B", "base_model:OpenPipe/mistral-ft-optimized-1218", "base_model:merge:OpenPipe/mistral-ft-optimized-1218", "base_model:mlabonne/NeuralHermes-2.5-Mistral-7B", "base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T02:07:19Z
--- tags: - merge - mergekit - lazymergekit - OpenPipe/mistral-ft-optimized-1218 - mlabonne/NeuralHermes-2.5-Mistral-7B base_model: - OpenPipe/mistral-ft-optimized-1218 - mlabonne/NeuralHermes-2.5-Mistral-7B --- # NeuralPipe-7B-slerp NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) * [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: OpenPipe/mistral-ft-optimized-1218 layer_range: [0, 32] - model: mlabonne/NeuralHermes-2.5-Mistral-7B layer_range: [0, 32] merge_method: slerp base_model: OpenPipe/mistral-ft-optimized-1218 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "TMOU715/NeuralPipe-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
saishf/West-Maid-7B-GGUF
saishf
2024-02-03T02:06:33Z
1
0
null
[ "gguf", "mergekit", "merge", "base_model:NeverSleep/Noromaid-7B-0.4-DPO", "base_model:merge:NeverSleep/Noromaid-7B-0.4-DPO", "base_model:senseable/WestLake-7B-v2", "base_model:merge:senseable/WestLake-7B-v2", "endpoints_compatible", "region:us" ]
null
2024-02-03T01:49:38Z
--- base_model: - senseable/WestLake-7B-v2 - NeverSleep/Noromaid-7B-0.4-DPO tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2) * [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: senseable/WestLake-7B-v2 layer_range: [0, 32] - model: NeverSleep/Noromaid-7B-0.4-DPO layer_range: [0, 32] merge_method: slerp base_model: senseable/WestLake-7B-v2 parameters: t: - filter: self_attn value: [0.6, 0.7, 0.8, 0.9, 1] - filter: mlp value: [0.4, 0.3, 0.2, 0.1, 0] - value: 0.5 dtype: bfloat16 ```
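As a rough illustration of what the SLERP merge method above does (this is not the actual mergekit implementation), the sketch below spherically interpolates two weight tensors with an interpolation factor `t`; the per-filter `t` values in the YAML control this factor layer by layer for the self-attention and MLP weights.

```python
# Illustrative sketch of spherical linear interpolation (SLERP) between two
# weight tensors; mergekit applies this idea per layer, with the YAML `t`
# values selecting the interpolation factor for self_attn / mlp weights.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_norm = a_flat / (a_flat.norm() + eps)
    b_norm = b_flat / (b_flat.norm() + eps)
    omega = torch.acos(torch.clamp(torch.dot(a_norm, b_norm), -1.0, 1.0))
    if omega.abs() < eps:  # nearly parallel vectors: fall back to linear interpolation
        return ((1 - t) * a_flat + t * b_flat).reshape(a.shape)
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)

# Toy usage: t=0.5 mixes the two "models" equally, as in the `- value: 0.5` default.
w_a, w_b = torch.randn(4, 4), torch.randn(4, 4)
print(slerp(0.5, w_a, w_b))
```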
CLMBR/re-irr-sv-agr-transformer-3
CLMBR
2024-02-03T01:56:56Z
1
0
transformers
[ "transformers", "pytorch", "opt", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-25T08:50:50Z
--- tags: - generated_from_trainer model-index: - name: re-irr-sv-agr-transformer-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # re-irr-sv-agr-transformer-3 This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.8879 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 3 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 3052726 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-------:|:---------------:| | 4.2136 | 0.03 | 76320 | 4.2085 | | 4.0097 | 0.03 | 152640 | 4.0416 | | 3.9022 | 1.03 | 228960 | 3.9680 | | 3.8365 | 0.03 | 305280 | 3.9277 | | 3.7852 | 1.03 | 381600 | 3.9033 | | 3.7453 | 0.03 | 457920 | 3.8884 | | 3.7125 | 0.03 | 534240 | 3.8780 | | 3.6804 | 1.03 | 610560 | 3.8712 | | 3.6515 | 0.03 | 686880 | 3.8676 | | 3.6286 | 1.03 | 763200 | 3.8659 | | 3.6017 | 0.03 | 839520 | 3.8648 | | 3.5874 | 1.03 | 915840 | 3.8655 | | 3.5701 | 0.03 | 992160 | 3.8646 | | 3.5504 | 1.03 | 1068480 | 3.8655 | | 3.5327 | 0.03 | 1144800 | 3.8667 | | 3.5158 | 0.03 | 1221120 | 3.8685 | | 3.4975 | 1.03 | 1297440 | 3.8692 | | 3.484 | 0.03 | 1373760 | 3.8711 | | 3.4713 | 1.03 | 1450080 | 3.8721 | | 3.4645 | 0.03 | 1526400 | 3.8742 | | 3.4551 | 1.03 | 1602720 | 3.8754 | | 3.4478 | 0.03 | 1679040 | 3.8768 | | 3.4378 | 1.03 | 1755360 | 3.8791 | | 3.4272 | 0.03 | 1831680 | 3.8814 | | 3.4131 | 1.03 | 1908000 | 3.8817 | | 3.4025 | 0.03 | 1984320 | 3.8825 | | 3.3882 | 1.03 | 2060640 | 3.8843 | | 3.3847 | 0.03 | 2136960 | 3.8852 | | 3.3755 | 1.03 | 2213280 | 3.8870 | | 3.3601 | 0.03 | 2289600 | 3.8887 | | 3.3496 | 1.03 | 2365920 | 3.8886 | | 3.3372 | 0.03 | 2442240 | 3.8893 | | 3.325 | 1.03 | 2518560 | 3.8901 | | 3.314 | 0.03 | 2594880 | 3.8912 | | 3.3058 | 1.03 | 2671200 | 3.8906 | | 3.3012 | 0.03 | 2747520 | 3.8909 | | 3.2952 | 0.03 | 2823840 | 3.8904 | | 3.2923 | 0.03 | 2900160 | 3.8897 | | 3.287 | 1.03 | 2976480 | 3.8889 | | 3.28 | 0.02 | 3052726 | 3.8879 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.3
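The card lists only training details; a minimal, hypothetical usage sketch for this text-generation checkpoint (nothing below comes from the original card, and it assumes the tokenizer is included in the repository) would be:

```python
# Hypothetical usage sketch: load the trained OPT-style checkpoint and generate text.
from transformers import pipeline

generator = pipeline("text-generation", model="CLMBR/re-irr-sv-agr-transformer-3")
# Invented prompt: a simple subject-verb agreement probe, in keeping with the model name.
print(generator("The keys to the cabinet", max_new_tokens=20)[0]["generated_text"])
```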
mlabonne/NeuralOmniBeagle-7B-v2
mlabonne
2024-02-03T01:54:31Z
12
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T01:51:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Kquant03/Mistral-7B-Instruct-v0.2-Neural-Story-GGUF
Kquant03
2024-02-03T01:33:14Z
34
0
transformers
[ "transformers", "gguf", "en", "dataset:NeuralNovel/Neural-Story-v1", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:quantized:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us", "conversational" ]
null
2024-02-01T18:30:55Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-Instruct-v0.2 datasets: - NeuralNovel/Neural-Story-v1 library_name: transformers inference: false language: - en --- ![Neural-Story](https://i.ibb.co/JFRYk6g/OIG-27.jpg) # NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story [BASE MODEL HERE](https://huggingface.co/NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story) The **Mistral-7B-Instruct-v0.2-Neural-Story** model, developed by NeuralNovel and funded by Techmind, is a language model finetuned from Mistral-7B-Instruct-v0.2 and designed to generate instructive and narrative text, with a specific focus on storytelling. This fine-tune has been tailored to provide detailed, creative responses in a narrative context and is optimised for short storytelling. It is based on Mistral AI's model and released under the Apache-2.0 license, making it suitable for commercial or non-commercial use. ### Data-set The model was finetuned using the Neural-Story-v1 dataset. ### Benchmark | Metric | Value | |-----------------------|---------------------------| | Avg. | **64.96** | | ARC | 64.08 | | HellaSwag | **66.89** | | MMLU | 60.67 | | TruthfulQA | 66.89 | | Winogrande | **75.85** | | GSM8K | 38.29 | Evaluated on **HuggingFaceH4/open_llm_leaderboard** ### Summary The model was fine-tuned with the intention of generating creative and narrative text, making it more suitable for creative writing prompts and storytelling. #### Out-of-Scope Use The model may not perform well in scenarios unrelated to instructive and narrative text generation. Misuse or applications outside its designed scope may result in suboptimal outcomes. ### Bias, Risks, and Limitations The model may exhibit biases or limitations inherent in the training data. It is essential to consider these factors when deploying the model to avoid unintended consequences. While the Neural-Story-v0.1 dataset serves as an excellent starting point for testing language models, users are advised to exercise caution, as there might be some inherent genre or writing bias. ### Hardware and Training ``` n_epochs = 3, n_checkpoints = 3, batch_size = 12, learning_rate = 1e-5, ``` *Sincere appreciation to Techmind for their generous sponsorship.*
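Because this repository ships GGUF quantisations, a minimal, hypothetical loading sketch with `llama-cpp-python` is shown below; the quant file name is an assumption and should be replaced with one of the files actually present in the repo.

```python
# Hypothetical sketch (not from the original card): run a GGUF quant of this model
# with llama-cpp-python. The file name below is an assumption; use a file from this repo.
from llama_cpp import Llama

llm = Llama(model_path="mistral-7b-instruct-v0.2-neural-story.Q4_K_M.gguf", n_ctx=4096)
out = llm(
    "[INST] Write the opening paragraph of a short story about a lighthouse keeper. [/INST]",
    max_tokens=256,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```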
saishf/Kuro-Lotus-10.7B-GGUF
saishf
2024-02-03T01:26:16Z
67
11
null
[ "gguf", "mergekit", "merge", "base_model:BlueNipples/SnowLotus-v2-10.7B", "base_model:merge:BlueNipples/SnowLotus-v2-10.7B", "base_model:Himitsui/KuroMitsu-11B", "base_model:merge:Himitsui/KuroMitsu-11B", "endpoints_compatible", "region:us" ]
null
2024-02-03T01:04:30Z
--- base_model: - BlueNipples/SnowLotus-v2-10.7B - Himitsui/KuroMitsu-11B tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [BlueNipples/SnowLotus-v2-10.7B](https://huggingface.co/BlueNipples/SnowLotus-v2-10.7B) * [Himitsui/KuroMitsu-11B](https://huggingface.co/Himitsui/KuroMitsu-11B) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: Himitsui/KuroMitsu-11B layer_range: [0, 48] - model: BlueNipples/SnowLotus-v2-10.7B layer_range: [0, 48] merge_method: slerp base_model: Himitsui/KuroMitsu-11B parameters: t: - filter: self_attn value: [0.6, 0.7, 0.8, 0.9, 1] - filter: mlp value: [0.4, 0.3, 0.2, 0.1, 0] - value: 0.5 dtype: bfloat16 ```
chrisvoncsefalvay/vaers-custom-tokenizer
chrisvoncsefalvay
2024-02-03T01:25:34Z
0
0
transformers
[ "transformers", "token-classification", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
token-classification
2024-02-02T16:10:07Z
--- library_name: transformers pipeline_tag: token-classification --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
r3m3c3/english-to-kanji-c42000_model_3_v_0
r3m3c3
2024-02-03T01:21:00Z
2
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-03T01:19:56Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
r3m3c3/english-to-kanji-c36500_model_3_v_0
r3m3c3
2024-02-03T01:18:55Z
3
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-03T01:17:36Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gretelai/mpt-7b
gretelai
2024-02-03T01:09:32Z
38
5
transformers
[ "transformers", "pytorch", "mpt", "text-generation", "Composer", "MosaicML", "llm-foundry", "StreamingDatasets", "custom_code", "dataset:mc4", "dataset:c4", "dataset:togethercomputer/RedPajama-Data-1T", "dataset:bigcode/the-stack", "dataset:allenai/s2orc", "arxiv:2108.12409", "arxiv:2302.13971", "arxiv:2205.14135", "arxiv:2010.04245", "arxiv:1909.08053", "arxiv:2302.06675", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-05-22T17:57:15Z
--- license: apache-2.0 tags: - Composer - MosaicML - llm-foundry - StreamingDatasets datasets: - mc4 - c4 - togethercomputer/RedPajama-Data-1T - bigcode/the-stack - allenai/s2orc inference: false --- # MPT-7B MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code. This model was trained by [MosaicML](https://www.mosaicml.com). MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference. These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)). Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence. MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer). This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference. ### How is this model different? MPT-7B is * **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)). * **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)). * **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409) (we finetuned [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter) on up to 65k inputs and can handle up to 84k vs. 2k-4k for other open source models). * **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)) * **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry) ### Models finetuned off MPT-7B: The following models are finetuned on MPT-7B: * [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter): a model designed to read and write fictional stories with super long context lengths. Built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3). At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens. We demonstrate generations as long as 80k tokens on a single A100-80GB GPU in our [blogpost](www.mosaicml.com/blog/mpt-7b). * License: Apache 2.0 * [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct): a model for short-form instruction following. Built by finetuning MPT-7B on a [dataset](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) we also release, derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. 
* License: Apache 2.0 * [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat): a chatbot-like model for dialogue generation. Built by finetuning MPT-7B on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3), [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets. * License: _CC-By-NC-SA-4.0_ ## Model Date May 5, 2023 ## Model License Apache-2.0 ## Documentation * [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b) * [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/) * Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)! ## How to Use This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning. ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b', trust_remote_code=True ) ``` Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package. `MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more. To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision: ```python import torch import transformers name = 'mosaicml/mpt-7b' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.attn_config['attn_impl'] = 'triton' config.init_device = 'cuda:0' # For fast initialization directly on GPU! model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, torch_dtype=torch.bfloat16, # Load model weights in bfloat16 trust_remote_code=True ) ``` Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example: ```python import transformers name = 'mosaicml/mpt-7b' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096 model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, trust_remote_code=True ) ``` This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b') ``` The model can then be used, for example, within a text-generation pipeline. Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html). 
```python import torch from transformers import pipeline pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0') with torch.autocast('cuda', dtype=torch.bfloat16): print( pipe('Here is a recipe for vegan banana bread:\n', max_new_tokens=100, do_sample=True, use_cache=True)) ``` ## Model Description The architecture is a modification of a standard decoder-only transformer. The model has been modified from a standard transformer in the following ways: * It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) * It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings * It does not use biases | Hyperparameter | Value | |----------------|-------| | n_parameters | 6.7B | | n_layers | 32 | | n_heads | 32 | | d_model | 4096 | | vocab size | 50432 | | sequence length | 2048 | ## Training Data ### Streaming Datasets Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training. StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset. ### Data Mix The model was trained for 1T tokens (with batch size 1760 and sequence length 2048). It was trained on the following data mix: | Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs | |-------------|----------------------------|------------|----------------------------|--------| | mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.14 | | C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 | | RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 | | The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 | | RedPajama - Wikipedia - En | 4.87 B | 0.04 | 40 B | 8.21 | | The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 | | S2ORC | 48.85 B | 0.033 | 33 B | 0.68 | | RedPajama - Books | 26.02 B | 0.03 | 30 B | 1.15 | | RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 | | RedPajama - StackExchange | 20.54 B | 0.014 | 14 B | 0.68 | Samples for each batch were selected from one of the datasets with the probability specified above. The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length. The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics, most of which are relevant for tokenizing code: (1) It was trained on a diverse mix of data that includes code (The Pile) (2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces (3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters. Because the model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)), model flop utilization (MFU) increased by up to four percentage points. ### Training Configuration This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform). 
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer. ## Limitations and Biases _The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_ MPT-7B (Base) is **not** intended for deployment without finetuning. It should not be used for human-facing interactions without further guardrails and user consent. MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-7B was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. ## MosaicML Platform If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b). ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Citation Please cite this model using the following format: ``` @online{MosaicML2023Introducing, author = {MosaicML NLP Team}, title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs}, year = {2023}, url = {www.mosaicml.com/blog/mpt-7b}, note = {Accessed: 2023-05-05}, urldate = {2023-05-05} } ```
r3m3c3/english-to-kanji-c23000_model_3_v_0
r3m3c3
2024-02-03T01:04:24Z
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-03T01:03:05Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
karawalla/aqmodel_20240203
karawalla
2024-02-03T01:02:25Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-03T01:02:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
r3m3c3/english-to-kanji-c20000_model_3_v_0
r3m3c3
2024-02-03T01:01:22Z
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-03T01:00:15Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
r3m3c3/english-to-kanji-c18000_model_3_v_0
r3m3c3
2024-02-03T00:58:26Z
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-03T00:57:19Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
r3m3c3/english-to-kanji-c14500_model_3_v_0
r3m3c3
2024-02-03T00:54:49Z
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-03T00:53:26Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vinluvie/clip-vit-large-patch14-finetuned
vinluvie
2024-02-03T00:48:20Z
71
0
transformers
[ "transformers", "safetensors", "clip", "zero-shot-image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:openai/clip-vit-large-patch14", "base_model:finetune:openai/clip-vit-large-patch14", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
2024-02-02T20:15:06Z
--- base_model: openai/clip-vit-large-patch14 tags: - generated_from_trainer datasets: - imagefolder model-index: - name: clip-vit-large-patch14-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clip-vit-large-patch14-finetuned This model is a fine-tuned version of [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7755 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.2.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
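For reference, a minimal zero-shot classification sketch with this fine-tuned checkpoint; the image path and candidate labels below are illustrative placeholders, since the classes of the imagefolder dataset are not documented in this card.

```python
from transformers import pipeline

# Load the fine-tuned CLIP checkpoint as a zero-shot image classifier.
classifier = pipeline(
    "zero-shot-image-classification",
    model="vinluvie/clip-vit-large-patch14-finetuned",
)

# Placeholder image path and labels; replace with your own data.
predictions = classifier(
    "path/to/image.jpg",
    candidate_labels=["a photo of a cat", "a photo of a dog"],
)
print(predictions)
```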
CLMBR/rel-cl-transformer-2
CLMBR
2024-02-03T00:32:20Z
1
0
transformers
[ "transformers", "pytorch", "opt", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T10:20:17Z
--- tags: - generated_from_trainer model-index: - name: rel-cl2-transformer-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rel-cl2-transformer-2 This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.8732 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 3052726 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-------:|:---------------:| | 4.2351 | 0.03 | 76320 | 4.2024 | | 4.0288 | 1.03 | 152640 | 4.0335 | | 3.9259 | 0.03 | 228960 | 3.9588 | | 3.8543 | 1.03 | 305280 | 3.9178 | | 3.804 | 0.03 | 381600 | 3.8922 | | 3.7641 | 1.03 | 457920 | 3.8757 | | 3.7273 | 0.03 | 534240 | 3.8649 | | 3.6967 | 1.03 | 610560 | 3.8587 | | 3.6658 | 0.03 | 686880 | 3.8543 | | 3.6407 | 1.03 | 763200 | 3.8511 | | 3.614 | 0.03 | 839520 | 3.8498 | | 3.5939 | 1.03 | 915840 | 3.8499 | | 3.5759 | 0.03 | 992160 | 3.8488 | | 3.5578 | 1.03 | 1068480 | 3.8506 | | 3.5451 | 0.03 | 1144800 | 3.8510 | | 3.534 | 1.03 | 1221120 | 3.8518 | | 3.5188 | 0.03 | 1297440 | 3.8544 | | 3.5058 | 1.03 | 1373760 | 3.8540 | | 3.4925 | 0.03 | 1450080 | 3.8565 | | 3.4832 | 1.03 | 1526400 | 3.8572 | | 3.4735 | 0.03 | 1602720 | 3.8599 | | 3.4643 | 1.03 | 1679040 | 3.8618 | | 3.4536 | 0.03 | 1755360 | 3.8628 | | 3.4408 | 1.03 | 1831680 | 3.8638 | | 3.4261 | 0.03 | 1908000 | 3.8659 | | 3.4152 | 1.03 | 1984320 | 3.8671 | | 3.4012 | 0.03 | 2060640 | 3.8685 | | 3.3916 | 0.03 | 2136960 | 3.8690 | | 3.3778 | 1.03 | 2213280 | 3.8713 | | 3.3672 | 0.03 | 2289600 | 3.8723 | | 3.3592 | 1.03 | 2365920 | 3.8732 | | 3.3547 | 0.03 | 2442240 | 3.8739 | | 3.3438 | 1.03 | 2518560 | 3.8749 | | 3.335 | 0.03 | 2594880 | 3.8767 | | 3.3251 | 1.03 | 2671200 | 3.8758 | | 3.3196 | 0.03 | 2747520 | 3.8764 | | 3.3133 | 1.03 | 2823840 | 3.8760 | | 3.3082 | 0.03 | 2900160 | 3.8752 | | 3.299 | 0.03 | 2976480 | 3.8748 | | 3.2919 | 1.02 | 3052726 | 3.8732 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.3
oshizo/japanese-e5-mistral-1.9b
oshizo
2024-02-03T00:28:28Z
10
2
transformers
[ "transformers", "safetensors", "mistral", "ja", "dataset:unicamp-dl/mmarco", "dataset:shunk031/jsnli", "license:mit", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2024-02-02T12:39:10Z
--- license: mit datasets: - unicamp-dl/mmarco - shunk031/jsnli language: - ja --- This model was trained on 800,000 Japanese sentences after reducing [oshizo/japanese-e5-mistral-7b_slerp](https://huggingface.co/oshizo/japanese-e5-mistral-7b_slerp) to 8 layers. See this article for details (in Japanese): https://note.com/oshizo/n/n9140df790315 See the [intfloat/e5-mistral-7b-instruct page](https://huggingface.co/intfloat/e5-mistral-7b-instruct#usage) for model usage; a minimal sketch following that recipe is shown below.
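The following is a hedged sketch, assuming this checkpoint follows the last-token-pooling recipe of intfloat/e5-mistral-7b-instruct; the example sentences, `max_length`, and padding handling are illustrative assumptions rather than documented settings.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

model_id = "oshizo/japanese-e5-mistral-1.9b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

# Assumption: fall back to the EOS token if no pad token is configured.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

def last_token_pool(hidden_states, attention_mask):
    # Take the hidden state of each sequence's final non-padding token.
    if attention_mask[:, -1].bool().all():  # left padding: last position is always real
        return hidden_states[:, -1]
    idx = attention_mask.sum(dim=1) - 1
    return hidden_states[torch.arange(hidden_states.size(0)), idx]

texts = ["明日の東京の天気は?", "東京の明日の天気予報は晴れです。"]
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    out = model(**batch)

embeddings = F.normalize(last_token_pool(out.last_hidden_state, batch["attention_mask"]), p=2, dim=-1)
print((embeddings[0] @ embeddings[1]).item())  # cosine similarity between the two sentences
```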
TedTansley/ppo-LunarLander-v2
TedTansley
2024-02-03T00:27:44Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-03T00:27:27Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 254.94 +/- 19.12 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch is shown below; the checkpoint filename is an assumption based on this repository's name, since the card does not list the uploaded files. ```python from stable_baselines3 import PPO from huggingface_sb3 import load_from_hub checkpoint = load_from_hub(repo_id="TedTansley/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
daila/wav2vec2-large-xls-r-300m-vi-colab
daila
2024-02-03T00:21:35Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_16_1", "base_model:daila/wav2vec2-large-xls-r-300m-vi-colab", "base_model:finetune:daila/wav2vec2-large-xls-r-300m-vi-colab", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-02T10:20:44Z
--- base_model: daila/wav2vec2-large-xls-r-300m-vi-colab tags: - generated_from_trainer datasets: - common_voice_16_1 metrics: - wer model-index: - name: wav2vec2-large-xls-r-300m-vi-colab results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_16_1 type: common_voice_16_1 config: vi split: test args: vi metrics: - name: Wer type: wer value: 0.5894672631150875 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-vi-colab This model is a fine-tuned version of [daila/wav2vec2-large-xls-r-300m-vi-colab](https://huggingface.co/daila/wav2vec2-large-xls-r-300m-vi-colab) on the common_voice_16_1 dataset. It achieves the following results on the evaluation set: - Loss: 1.6432 - Wer: 0.5895 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0916 | 4.52 | 400 | 1.5440 | 0.6357 | | 0.1344 | 9.04 | 800 | 1.6043 | 0.6543 | | 0.0926 | 13.56 | 1200 | 1.7226 | 0.6365 | | 0.0703 | 18.08 | 1600 | 1.5989 | 0.6048 | | 0.0557 | 22.6 | 2000 | 1.6714 | 0.6001 | | 0.051 | 27.12 | 2400 | 1.6432 | 0.5895 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
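For reference, a minimal transcription sketch with this checkpoint; the audio path is a placeholder, and 16 kHz mono input is assumed, as is standard for wav2vec2 models.

```python
from transformers import pipeline

# Load the fine-tuned Vietnamese ASR checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="daila/wav2vec2-large-xls-r-300m-vi-colab",
)

# Placeholder audio file; any 16 kHz Vietnamese speech clip should work.
print(asr("path/to/vietnamese_clip.wav")["text"])
```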
Omar95farag/2024-01-31_one_stage_subgraphs_weighted_txt_vis_conc_all_ramp-g0.7
Omar95farag
2024-02-03T00:05:42Z
2
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "base_model:microsoft/layoutlmv3-base", "base_model:finetune:microsoft/layoutlmv3-base", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-02T23:25:27Z
--- license: cc-by-nc-sa-4.0 base_model: microsoft/layoutlmv3-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: 2024-01-31_one_stage_subgraphs_weighted_txt_vis_conc_all_ramp results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2024-01-31_one_stage_subgraphs_weighted_txt_vis_conc_all_ramp This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3452 - Accuracy: 0.7625 - Exit 0 Accuracy: 0.29 - Exit 1 Accuracy: 0.4625 - Exit 2 Accuracy: 0.5225 - Exit 3 Accuracy: 0.585 - Exit 4 Accuracy: 0.625 - Exit 5 Accuracy: 0.695 - Exit 6 Accuracy: 0.71 - Exit 7 Accuracy: 0.73 - Exit 8 Accuracy: 0.73 - Exit 9 Accuracy: 0.7575 - Exit 10 Accuracy: 0.76 - Exit 11 Accuracy: 0.7575 - Exit 12 Accuracy: 0.7625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 24 - total_train_batch_size: 48 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy | Exit 5 Accuracy | Exit 6 Accuracy | Exit 7 Accuracy | Exit 8 Accuracy | Exit 9 Accuracy | Exit 10 Accuracy | Exit 11 Accuracy | Exit 12 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|:----------------:|:----------------:|:----------------:| | No log | 0.96 | 16 | 2.6725 | 0.165 | 0.0975 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0925 | | No log | 1.98 | 33 | 2.4536 | 0.2625 | 0.125 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.1425 | | No log | 3.0 | 50 | 2.1927 | 0.3825 | 0.14 | 0.1025 | 0.0625 | 0.0625 | 0.0625 | 0.125 | 0.0975 | 0.0625 | 0.0625 | 0.0625 | 0.065 | 0.0625 | 0.2325 | | No log | 3.96 | 66 | 1.9488 | 0.45 | 0.1575 | 0.1025 | 0.0625 | 0.0625 | 0.0625 | 0.1275 | 0.13 | 0.0625 | 0.0625 | 0.0625 | 0.0575 | 0.0625 | 0.335 | | No log | 4.98 | 83 | 1.6922 | 0.54 | 0.16 | 0.0975 | 0.0625 | 0.0625 | 0.0775 | 0.24 | 0.1975 | 0.12 | 0.0625 | 0.0625 | 0.065 | 0.0625 | 0.4175 | | No log | 6.0 | 100 | 1.4713 | 0.615 | 0.16 | 0.115 | 0.0625 | 0.0625 | 0.1375 | 0.3575 | 0.315 | 0.2125 | 0.1075 | 0.0625 | 0.075 | 0.0625 | 0.4875 | | No log | 6.96 | 116 | 1.2943 | 0.6625 | 0.165 | 0.1175 | 0.0625 | 0.0675 | 0.1525 | 0.3425 | 0.3675 | 0.38 | 0.105 | 0.0625 | 0.13 | 0.095 | 0.565 | | No log | 7.98 | 133 | 1.1118 | 0.7225 | 0.175 | 0.1275 | 0.0625 | 0.0675 | 0.1425 | 0.36 | 0.3925 | 0.4575 | 0.1075 | 0.0625 | 0.16 | 0.1925 | 0.6375 | | No log | 9.0 | 150 | 1.0188 | 0.74 | 0.1725 | 0.1225 | 0.0625 | 0.1025 | 0.185 | 0.4075 | 0.4475 | 0.535 | 0.0925 | 0.0625 | 0.1025 | 0.2925 | 0.6825 | | No log | 9.96 | 166 
| 0.9689 | 0.7375 | 0.17 | 0.125 | 0.0625 | 0.1375 | 0.165 | 0.4675 | 0.475 | 0.5825 | 0.0975 | 0.115 | 0.2075 | 0.35 | 0.6975 | | No log | 10.98 | 183 | 0.8788 | 0.77 | 0.1775 | 0.16 | 0.0625 | 0.1125 | 0.2025 | 0.4975 | 0.49 | 0.5975 | 0.1225 | 0.1275 | 0.2525 | 0.4025 | 0.75 | | No log | 12.0 | 200 | 0.9443 | 0.7125 | 0.1725 | 0.165 | 0.0625 | 0.14 | 0.195 | 0.515 | 0.505 | 0.59 | 0.1525 | 0.17 | 0.2725 | 0.4 | 0.705 | | No log | 12.96 | 216 | 0.8964 | 0.7375 | 0.185 | 0.2025 | 0.0675 | 0.2225 | 0.21 | 0.505 | 0.51 | 0.6025 | 0.215 | 0.1325 | 0.405 | 0.4475 | 0.7375 | | No log | 13.98 | 233 | 0.9871 | 0.73 | 0.19 | 0.23 | 0.065 | 0.21 | 0.23 | 0.4625 | 0.505 | 0.6175 | 0.31 | 0.14 | 0.46 | 0.57 | 0.72 | | No log | 15.0 | 250 | 1.0090 | 0.73 | 0.2175 | 0.265 | 0.115 | 0.225 | 0.2075 | 0.47 | 0.525 | 0.64 | 0.4625 | 0.3525 | 0.485 | 0.63 | 0.7375 | | No log | 15.96 | 266 | 0.9539 | 0.76 | 0.2175 | 0.2525 | 0.13 | 0.185 | 0.35 | 0.3975 | 0.555 | 0.6375 | 0.46 | 0.435 | 0.46 | 0.6875 | 0.7575 | | No log | 16.98 | 283 | 0.9204 | 0.7625 | 0.2225 | 0.295 | 0.1375 | 0.27 | 0.3925 | 0.5075 | 0.595 | 0.65 | 0.5625 | 0.475 | 0.5275 | 0.6825 | 0.765 | | No log | 18.0 | 300 | 0.9639 | 0.77 | 0.2375 | 0.2675 | 0.17 | 0.2625 | 0.455 | 0.515 | 0.6175 | 0.685 | 0.5775 | 0.6525 | 0.63 | 0.7125 | 0.765 | | No log | 18.96 | 316 | 0.9644 | 0.7725 | 0.25 | 0.3175 | 0.175 | 0.3025 | 0.4975 | 0.5375 | 0.65 | 0.695 | 0.65 | 0.66 | 0.6225 | 0.745 | 0.7675 | | No log | 19.98 | 333 | 0.9984 | 0.7675 | 0.25 | 0.3275 | 0.1975 | 0.295 | 0.5375 | 0.58 | 0.6775 | 0.7 | 0.6825 | 0.65 | 0.6875 | 0.7425 | 0.7625 | | No log | 21.0 | 350 | 0.9756 | 0.775 | 0.24 | 0.3025 | 0.22 | 0.2825 | 0.52 | 0.59 | 0.6725 | 0.7125 | 0.6875 | 0.6775 | 0.6975 | 0.7525 | 0.7825 | | No log | 21.96 | 366 | 1.0060 | 0.7675 | 0.235 | 0.2525 | 0.255 | 0.2975 | 0.545 | 0.61 | 0.675 | 0.7125 | 0.6925 | 0.7 | 0.695 | 0.75 | 0.7675 | | No log | 22.98 | 383 | 1.0393 | 0.7675 | 0.245 | 0.265 | 0.2175 | 0.3125 | 0.53 | 0.62 | 0.69 | 0.7125 | 0.7275 | 0.7125 | 0.7 | 0.76 | 0.7675 | | No log | 24.0 | 400 | 1.0382 | 0.77 | 0.2475 | 0.29 | 0.2475 | 0.33 | 0.5525 | 0.66 | 0.7 | 0.72 | 0.755 | 0.735 | 0.7125 | 0.77 | 0.7625 | | No log | 24.96 | 416 | 1.0630 | 0.76 | 0.255 | 0.2525 | 0.23 | 0.3675 | 0.5325 | 0.6225 | 0.685 | 0.7125 | 0.75 | 0.7275 | 0.7275 | 0.7675 | 0.7625 | | No log | 25.98 | 433 | 1.0887 | 0.7625 | 0.26 | 0.2825 | 0.2425 | 0.3775 | 0.5325 | 0.6575 | 0.7025 | 0.705 | 0.7525 | 0.76 | 0.755 | 0.775 | 0.765 | | No log | 27.0 | 450 | 1.1224 | 0.7675 | 0.255 | 0.3125 | 0.2425 | 0.3875 | 0.5275 | 0.665 | 0.7 | 0.7125 | 0.7475 | 0.7675 | 0.75 | 0.7625 | 0.7675 | | No log | 27.96 | 466 | 1.1230 | 0.7625 | 0.275 | 0.3675 | 0.2775 | 0.3825 | 0.5525 | 0.67 | 0.6875 | 0.7075 | 0.7425 | 0.7475 | 0.745 | 0.7675 | 0.7575 | | No log | 28.98 | 483 | 1.1384 | 0.7525 | 0.2625 | 0.375 | 0.3075 | 0.38 | 0.5375 | 0.67 | 0.7 | 0.7325 | 0.745 | 0.7525 | 0.7525 | 0.75 | 0.75 | | 0.3128 | 30.0 | 500 | 1.1192 | 0.76 | 0.285 | 0.42 | 0.415 | 0.4425 | 0.585 | 0.6825 | 0.725 | 0.7375 | 0.755 | 0.7725 | 0.7625 | 0.765 | 0.76 | | 0.3128 | 30.96 | 516 | 1.1687 | 0.7625 | 0.27 | 0.3775 | 0.335 | 0.3875 | 0.5675 | 0.665 | 0.685 | 0.725 | 0.7375 | 0.7475 | 0.7525 | 0.7575 | 0.76 | | 0.3128 | 31.98 | 533 | 1.2018 | 0.755 | 0.2625 | 0.37 | 0.3325 | 0.385 | 0.5575 | 0.6625 | 0.69 | 0.73 | 0.7475 | 0.76 | 0.7575 | 0.76 | 0.75 | | 0.3128 | 33.0 | 550 | 1.1723 | 0.7725 | 0.265 | 0.355 | 0.3425 | 0.4 | 0.575 | 0.65 | 0.685 | 0.715 | 0.745 | 0.76 | 0.77 | 0.77 | 0.775 | | 0.3128 | 
33.96 | 566 | 1.2252 | 0.7475 | 0.28 | 0.4175 | 0.4325 | 0.4675 | 0.5775 | 0.67 | 0.7025 | 0.715 | 0.7475 | 0.7525 | 0.76 | 0.755 | 0.745 | | 0.3128 | 34.98 | 583 | 1.1831 | 0.765 | 0.29 | 0.4375 | 0.435 | 0.4825 | 0.5925 | 0.68 | 0.7175 | 0.735 | 0.75 | 0.7575 | 0.765 | 0.765 | 0.77 | | 0.3128 | 36.0 | 600 | 1.2292 | 0.755 | 0.28 | 0.4375 | 0.4275 | 0.4875 | 0.5875 | 0.6725 | 0.7 | 0.7275 | 0.745 | 0.7475 | 0.745 | 0.75 | 0.7525 | | 0.3128 | 36.96 | 616 | 1.2460 | 0.755 | 0.2825 | 0.425 | 0.435 | 0.5125 | 0.59 | 0.6775 | 0.7075 | 0.73 | 0.745 | 0.7525 | 0.755 | 0.755 | 0.755 | | 0.3128 | 37.98 | 633 | 1.2560 | 0.7525 | 0.2675 | 0.4525 | 0.46 | 0.5175 | 0.5875 | 0.6675 | 0.705 | 0.7225 | 0.74 | 0.745 | 0.745 | 0.7475 | 0.7525 | | 0.3128 | 39.0 | 650 | 1.2463 | 0.77 | 0.2825 | 0.45 | 0.475 | 0.5225 | 0.59 | 0.67 | 0.6975 | 0.73 | 0.7425 | 0.76 | 0.7625 | 0.76 | 0.765 | | 0.3128 | 39.96 | 666 | 1.2493 | 0.765 | 0.2775 | 0.455 | 0.49 | 0.5325 | 0.6 | 0.6825 | 0.7225 | 0.7425 | 0.75 | 0.765 | 0.76 | 0.765 | 0.7625 | | 0.3128 | 40.98 | 683 | 1.2727 | 0.7625 | 0.275 | 0.47 | 0.49 | 0.535 | 0.61 | 0.68 | 0.7 | 0.7275 | 0.74 | 0.7525 | 0.76 | 0.76 | 0.7575 | | 0.3128 | 42.0 | 700 | 1.2951 | 0.7525 | 0.2725 | 0.445 | 0.495 | 0.5525 | 0.5975 | 0.67 | 0.6975 | 0.735 | 0.7575 | 0.75 | 0.7575 | 0.76 | 0.75 | | 0.3128 | 42.96 | 716 | 1.2865 | 0.75 | 0.275 | 0.455 | 0.5075 | 0.5525 | 0.6025 | 0.695 | 0.71 | 0.73 | 0.745 | 0.7475 | 0.7575 | 0.755 | 0.75 | | 0.3128 | 43.98 | 733 | 1.2864 | 0.76 | 0.2775 | 0.465 | 0.5025 | 0.5575 | 0.6075 | 0.6925 | 0.6975 | 0.73 | 0.7375 | 0.7575 | 0.7625 | 0.755 | 0.7575 | | 0.3128 | 45.0 | 750 | 1.3615 | 0.7575 | 0.285 | 0.465 | 0.5075 | 0.5525 | 0.6175 | 0.6875 | 0.7075 | 0.735 | 0.73 | 0.7375 | 0.75 | 0.745 | 0.7525 | | 0.3128 | 45.96 | 766 | 1.3161 | 0.7525 | 0.2825 | 0.47 | 0.5125 | 0.5575 | 0.62 | 0.6825 | 0.6975 | 0.7275 | 0.735 | 0.7525 | 0.755 | 0.755 | 0.7525 | | 0.3128 | 46.98 | 783 | 1.3508 | 0.755 | 0.29 | 0.4775 | 0.5125 | 0.5725 | 0.6175 | 0.6875 | 0.705 | 0.7175 | 0.73 | 0.7525 | 0.7575 | 0.755 | 0.7575 | | 0.3128 | 48.0 | 800 | 1.3321 | 0.76 | 0.285 | 0.47 | 0.5175 | 0.565 | 0.62 | 0.6925 | 0.7125 | 0.73 | 0.7475 | 0.755 | 0.7575 | 0.7575 | 0.7575 | | 0.3128 | 48.96 | 816 | 1.3362 | 0.7625 | 0.2825 | 0.465 | 0.515 | 0.5725 | 0.6275 | 0.69 | 0.705 | 0.74 | 0.745 | 0.7625 | 0.765 | 0.76 | 0.76 | | 0.3128 | 49.98 | 833 | 1.3070 | 0.76 | 0.2825 | 0.4725 | 0.5175 | 0.5725 | 0.62 | 0.69 | 0.71 | 0.7325 | 0.7375 | 0.7525 | 0.76 | 0.76 | 0.7625 | | 0.3128 | 51.0 | 850 | 1.3199 | 0.7575 | 0.2875 | 0.47 | 0.5125 | 0.5775 | 0.625 | 0.6875 | 0.705 | 0.7375 | 0.735 | 0.755 | 0.7675 | 0.76 | 0.7575 | | 0.3128 | 51.96 | 866 | 1.3464 | 0.755 | 0.2875 | 0.4675 | 0.515 | 0.5775 | 0.6275 | 0.685 | 0.7075 | 0.73 | 0.7325 | 0.755 | 0.76 | 0.7525 | 0.755 | | 0.3128 | 52.98 | 883 | 1.3286 | 0.7575 | 0.29 | 0.47 | 0.515 | 0.5775 | 0.6275 | 0.6925 | 0.7125 | 0.7325 | 0.7425 | 0.76 | 0.7625 | 0.76 | 0.76 | | 0.3128 | 54.0 | 900 | 1.3277 | 0.7625 | 0.2925 | 0.4625 | 0.52 | 0.5825 | 0.6275 | 0.6975 | 0.715 | 0.735 | 0.74 | 0.7575 | 0.765 | 0.7575 | 0.76 | | 0.3128 | 54.96 | 916 | 1.3274 | 0.7625 | 0.2925 | 0.4625 | 0.52 | 0.5875 | 0.6275 | 0.695 | 0.7125 | 0.7375 | 0.7375 | 0.7575 | 0.7625 | 0.7625 | 0.7625 | | 0.3128 | 55.98 | 933 | 1.3393 | 0.7625 | 0.29 | 0.4625 | 0.5225 | 0.585 | 0.625 | 0.695 | 0.71 | 0.7375 | 0.7375 | 0.755 | 0.7625 | 0.755 | 0.76 | | 0.3128 | 57.0 | 950 | 1.3453 | 0.7625 | 0.29 | 0.46 | 0.5225 | 0.585 | 0.625 | 0.695 | 0.71 | 0.73 | 0.73 | 0.7575 | 0.76 | 
0.7575 | 0.7625 | | 0.3128 | 57.6 | 960 | 1.3452 | 0.7625 | 0.29 | 0.4625 | 0.5225 | 0.585 | 0.625 | 0.695 | 0.71 | 0.73 | 0.73 | 0.7575 | 0.76 | 0.7575 | 0.7625 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
thebluedays/distilbert-base-uncased-finetuned-emotion
thebluedays
2024-02-03T00:01:54Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-01T02:53:49Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.923 - name: F1 type: f1 value: 0.9229154998434255 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2219 - Accuracy: 0.923 - F1: 0.9229 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8309 | 1.0 | 250 | 0.3238 | 0.902 | 0.9010 | | 0.2527 | 2.0 | 500 | 0.2219 | 0.923 | 0.9229 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
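For reference, a minimal inference sketch with this checkpoint; the example sentence is illustrative, and the predicted labels follow the emotion dataset's classes.

```python
from transformers import pipeline

# Load the fine-tuned emotion classifier.
classifier = pipeline(
    "text-classification",
    model="thebluedays/distilbert-base-uncased-finetuned-emotion",
)

# Returns the top predicted emotion label and its score.
print(classifier("I can't wait to see you this weekend!"))
```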
CLMBR/re-irr-sv-agr-lstm-0
CLMBR
2024-02-02T23:55:30Z
1
0
transformers
[ "transformers", "pytorch", "rnn", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2024-01-25T10:25:54Z
---
tags:
- generated_from_trainer
model-index:
- name: re-irr-sv-agr-lstm-0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# re-irr-sv-agr-lstm-0

This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9871

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726

### Training results

| Training Loss | Epoch | Step    | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.774         | 0.03  | 76320   | 4.7628          |
| 4.4898        | 1.03  | 152640  | 4.4830          |
| 4.3488        | 0.03  | 228960  | 4.3492          |
| 4.2646        | 1.03  | 305280  | 4.2676          |
| 4.1995        | 0.03  | 381600  | 4.2116          |
| 4.1574        | 1.03  | 457920  | 4.1707          |
| 4.1193        | 0.03  | 534240  | 4.1409          |
| 4.0851        | 0.03  | 610560  | 4.1171          |
| 4.0581        | 1.03  | 686880  | 4.0982          |
| 4.0332        | 0.03  | 763200  | 4.0820          |
| 4.0122        | 1.03  | 839520  | 4.0701          |
| 4.0007        | 0.03  | 915840  | 4.0597          |
| 3.986         | 1.03  | 992160  | 4.0507          |
| 3.9678        | 0.03  | 1068480 | 4.0432          |
| 3.9529        | 1.03  | 1144800 | 4.0362          |
| 3.9348        | 0.03  | 1221120 | 4.0301          |
| 3.922         | 0.03  | 1297440 | 4.0256          |
| 3.9113        | 1.03  | 1373760 | 4.0213          |
| 3.9021        | 0.03  | 1450080 | 4.0174          |
| 3.8989        | 1.03  | 1526400 | 4.0139          |
| 3.8936        | 0.03  | 1602720 | 4.0110          |
| 3.8923        | 1.03  | 1679040 | 4.0083          |
| 3.8889        | 0.03  | 1755360 | 4.0060          |
| 3.8785        | 1.03  | 1831680 | 4.0036          |
| 3.8726        | 0.03  | 1908000 | 4.0011          |
| 3.8671        | 0.03  | 1984320 | 3.9992          |
| 3.8603        | 1.03  | 2060640 | 3.9976          |
| 3.8618        | 0.03  | 2136960 | 3.9963          |
| 3.8576        | 1.03  | 2213280 | 3.9950          |
| 3.8495        | 0.03  | 2289600 | 3.9942          |
| 3.846         | 1.03  | 2365920 | 3.9930          |
| 3.8371        | 2.03  | 2442240 | 3.9918          |
| 3.8292        | 0.03  | 2518560 | 3.9914          |
| 3.8253        | 1.03  | 2594880 | 3.9905          |
| 3.8211        | 0.03  | 2671200 | 3.9897          |
| 3.823         | 1.03  | 2747520 | 3.9888          |
| 3.8256        | 0.03  | 2823840 | 3.9882          |
| 3.8285        | 1.03  | 2900160 | 3.9876          |
| 3.8292        | 0.03  | 2976480 | 3.9874          |
| 3.8239        | 1.02  | 3052726 | 3.9871          |

### Framework versions

- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
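For orientation only, here is a hedged sketch of a `transformers` `TrainingArguments` configuration that mirrors the hyperparameters listed above (the `generated_from_trainer` tag indicates the Hugging Face Trainer was used). The output directory and the model/dataset objects are placeholders; the actual training script for this LSTM is not part of the card.

```python
# Hypothetical reconstruction of the training configuration described above.
# Only the hyperparameter values are taken from the card; everything else
# (output directory, model, datasets) is a placeholder.
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="re-irr-sv-agr-lstm-0",  # placeholder output directory
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=0,
    lr_scheduler_type="linear",
    max_steps=3_052_726,                # training_steps from the card
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```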
microsoft/falcon-7B-onnx
microsoft
2024-02-02T23:43:40Z
0
0
null
[ "onnx", "falcon-7b", "falcon", "onnxruntime", "llm", "en", "base_model:tiiuae/falcon-7b", "base_model:quantized:tiiuae/falcon-7b", "license:apache-2.0", "region:us" ]
null
2023-11-14T20:40:36Z
---
license: apache-2.0
base_model: tiiuae/falcon-7b
language:
- en
tags:
- falcon-7b
- falcon
- onnxruntime
- onnx
- llm
---

#### This is an optimized version of the Falcon 7B model, available on this repository: https://huggingface.co/tiiuae/falcon-7b and under the license on such repository. Microsoft permits you to use, modify, redistribute and create derivatives of Microsoft's contributions to the optimized version subject to the restrictions and disclaimers of warranty and liability in the license agreement.

# falcon-7b for ONNX Runtime

## Introduction

This repository hosts the optimized version of **falcon-7b** to accelerate inference with the ONNX Runtime CUDA execution provider.

See the [usage instructions](#usage-example) for how to run inference on this model with the ONNX files hosted in this repository.

## Model Description

- **Developed by:** TIIUAE
- **Model type:** Pretrained generative text model
- **License:** Apache 2.0 License
- **Model Description:** This is a conversion of [falcon-7b](https://huggingface.co/tiiuae/falcon-7b) for [ONNX Runtime](https://github.com/microsoft/onnxruntime) inference with the CUDA execution provider.

## Performance Comparison

#### Latency for token generation

Below is the average latency of generating a token, for prompts of varying length, on an NVIDIA A100-SXM4-80GB GPU:

| Prompt Length | Batch Size | PyTorch 2.1 torch.compile | ONNX Runtime CUDA |
|---------------|------------|---------------------------|-------------------|
| 32            | 1          | 53.64ms                   | 15.68ms           |
| 256           | 1          | 59.55ms                   | 26.05ms           |
| 1024          | 1          | 89.82ms                   | 99.05ms           |
| 2048          | 1          | 208.0ms                   | 227.0ms           |
| 32            | 4          | 70.8ms                    | 19.62ms           |
| 256           | 4          | 78.6ms                    | 81.29ms           |
| 1024          | 4          | 373.7ms                   | 369.6ms           |
| 2048          | 4          | N/A                       | 879.2ms           |

## Usage Example

1. Clone the onnxruntime repository.
```shell
git clone https://github.com/microsoft/onnxruntime
cd onnxruntime
```

2. Install the required dependencies.
```shell
python3 -m pip install -r onnxruntime/python/tools/transformers/models/llama/requirements-cuda.txt
```

5. Run inference with Hugging Face Optimum's `ORTModelForCausalLM`.
```python
from optimum.onnxruntime import ORTModelForCausalLM
from onnxruntime import InferenceSession
from transformers import AutoConfig, AutoTokenizer

# Build an ONNX Runtime session on the CUDA execution provider and wrap it
# with Optimum's causal-LM interface.
sess = InferenceSession("falcon-7b.onnx", providers=["CUDAExecutionProvider"])
config = AutoConfig.from_pretrained("tiiuae/falcon-7b")
model = ORTModelForCausalLM(sess, config, use_cache=True, use_io_binding=True)

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
inputs = tokenizer("Instruct: What is a fermi paradox?\nOutput:", return_tensors="pt")

outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
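As a rough illustration of how per-token latencies like those in the table above could be estimated, here is a hedged micro-benchmark sketch. It reuses the `model` and `tokenizer` objects from the usage example; the prompt, warm-up strategy, and generated-token count are assumptions, not the methodology behind the published numbers.

```python
# Hypothetical micro-benchmark: average latency per generated token.
# Assumes `model` and `tokenizer` were created as in the usage example above.
import time

prompt = "Instruct: What is a fermi paradox?\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt")
new_tokens = 32  # number of tokens to generate per timed run (assumption)

# Warm-up run so CUDA kernels and IO binding are initialized before timing.
model.generate(**inputs, max_new_tokens=new_tokens, min_new_tokens=new_tokens)

start = time.perf_counter()
model.generate(**inputs, max_new_tokens=new_tokens, min_new_tokens=new_tokens)
elapsed = time.perf_counter() - start

print(f"average latency per generated token: {1000 * elapsed / new_tokens:.2f} ms")
```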