modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-02 00:43:14) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 461 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-02 00:42:27) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
lex-hue/LexGPT-Beta | lex-hue | 2024-01-24T05:00:38Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:TIGER-Lab/MathInstruct",
"dataset:meta-math/MetaMathQA",
"dataset:LDJnr/Capybara",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-23T18:11:52Z | ---
inference: true
license: mit
datasets:
- TIGER-Lab/MathInstruct
- meta-math/MetaMathQA
- LDJnr/Capybara
---
# LexGPT-Beta
## Description
LexGPT-Beta is a large language model (LLM) fine-tuned from Mistral-7B-Instruct-v0.2. It is designed to excel at mathematical instruction and reasoning, covering a wide range of mathematical and logical capabilities. The model is still in its beta stage, with development and testing ongoing.
## Model Information
- **Base Model**: Mistral-7B-Instruct-v0.2, a versatile LLM renowned for its proficiency in understanding and adhering to intricate instructions. This model surpasses its predecessor, Mistral-7B-Instruct-v0.1, and is widely recognized for its applicability across various AI domains.
- **Fine-tuning Datasets**:
- [TIGER-Lab/MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA)
- [LDJnr/Capybara](https://huggingface.co/datasets/LDJnr/Capybara)
- **Fine-tuning Time**: Approximately 90 hours.
- **Parameters**: 7.24 billion.
- **Censorship**: Inconsistently censored; outputs may at times be uncensored (see the Caution section below).
## Application
LexGPT-Beta can be employed for a multitude of tasks, particularly shining in those involving mathematical reasoning and instruction comprehension. To fully exploit its potential, it's advisable to formulate prompts with concise and explicit instructions.
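The card itself does not include usage code; as a minimal inference sketch, assuming the repository id `lex-hue/LexGPT-Beta` from this card and the standard 🤗 Transformers text-generation API:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lex-hue/LexGPT-Beta"  # repository id from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Keep instructions concise and explicit, as recommended above
prompt = "Solve step by step: what is the sum of the first 100 positive integers?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```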
## Comparison to ChatGPT
Preliminary assessments suggest that LexGPT-Beta's performance is on par with ChatGPT, particularly in mathematical scenarios. However, no HumanEval benchmark has been run to compare the two definitively.
## Development Standing
LexGPT-Beta is presently in the beta phase. Input and collaboration are encouraged to expand the model's functionalities and applications.
## Caution
As LexGPT-Beta is uncensored and in its beta phase, users are urged to exercise caution when utilizing the model. The model's outputs should be scrutinized, especially in delicate contexts.
## Credits
LexGPT-Beta is derived from Mistral-7B-Instruct-v0.2, created by the Mistral AI Team. The fine-tuning process leveraged datasets accessible on Hugging Face, significantly contributing to the model's specialized features. |
judith0/classification_INE_v2 | judith0 | 2024-01-24T04:55:53Z | 176 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:judith0/classification_INE_v1",
"base_model:finetune:judith0/classification_INE_v1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-01-24T04:53:00Z | ---
license: apache-2.0
base_model: judith0/classification_INE_v1
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: classification_INE_v1-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classification_INE_v1-finetuned-eurosat
This model is a fine-tuned version of [judith0/classification_INE_v1](https://huggingface.co/judith0/classification_INE_v1) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0261
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
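As a rough illustration only, the settings listed above correspond approximately to the following 🤗 `TrainingArguments`; the actual training script is not included in this card, so treat this as a sketch (the `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="classification_INE_v1-finetuned-eurosat",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,   # gives the listed total train batch size of 128 on one device
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```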
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.96 | 6 | 0.0528 | 0.9884 |
| 0.1411 | 1.92 | 12 | 0.0261 | 1.0 |
| 0.1411 | 2.88 | 18 | 0.0182 | 1.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
RiverTest/RiverMTG24 | RiverTest | 2024-01-24T04:53:44Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-01-24T04:42:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shuyuej/prompt_consistency_epoch_2 | shuyuej | 2024-01-24T04:45:19Z | 0 | 1 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-01-23T21:32:01Z | ---
license: apache-2.0
---
# 🚀 Adapter Usage
```python
# Load the pre-trained LoRA adapter onto an already-initialized
# 🤗 Transformers model (`model`); requires the `peft` package
model.load_adapter("shuyuej/prompt_consistency_epoch_2")
model.enable_adapters()
```
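For context, a fuller sketch of the same calls; the base model below is only a placeholder (the card does not say which model the adapter was trained for), and `peft` must be installed for `load_adapter` to work:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: the card does not state which base model the adapter targets
base_model_id = "<base-model-id>"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id)

# Attach and activate the LoRA adapter from this repository (requires `peft`)
model.load_adapter("shuyuej/prompt_consistency_epoch_2")
model.enable_adapters()
```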
|
shuyuej/prompt_consistency_epoch_1 | shuyuej | 2024-01-24T04:45:06Z | 0 | 1 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-01-23T21:26:47Z | ---
license: apache-2.0
---
# 🚀 Adapter Usage
```python
# Load the pre-trained LoRA adapter onto an already-initialized
# 🤗 Transformers model (`model`); requires the `peft` package
model.load_adapter("shuyuej/prompt_consistency_epoch_1")
model.enable_adapters()
```
|
LoneStriker/openbuddy-deepseek-10b-v17.1-4k-4.0bpw-h6-exl2 | LoneStriker | 2024-01-24T04:39:27Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"fi",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-24T04:36:54Z | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
- fi
pipeline_tag: text-generation
inference: false
library_name: transformers
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/deepseek-ai/deepseek-llm-7b-base
License: [deepseek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL)
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## 免责声明
所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。
OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。
使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。 |
tvjoseph/GenerAd-AI | tvjoseph | 2024-01-24T04:38:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-24T04:38:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LoneStriker/openbuddy-deepseek-10b-v17.1-4k-3.0bpw-h6-exl2 | LoneStriker | 2024-01-24T04:36:52Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"fi",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-24T04:34:47Z | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
- fi
pipeline_tag: text-generation
inference: false
library_name: transformers
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/deepseek-ai/deepseek-llm-7b-base
License: [deepseek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL)
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## 免责声明
所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。
OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。
使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。 |
genne/kiwi_solar_merge_ties2_dpo | genne | 2024-01-24T04:32:35Z | 103 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-15T23:23:32Z | ---
license: apache-2.0
language:
- ko
--- |
Megalino111/PixelCopter | Megalino111 | 2024-01-24T04:28:34Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-24T04:28:02Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 52.40 +/- 37.72
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jiudth/ppo-LunarLander-v2 | jiudth | 2024-01-24T04:22:24Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-24T04:22:06Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 245.68 +/- 38.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
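A minimal loading sketch, assuming the checkpoint was pushed with the usual `huggingface_sb3` naming convention (the exact filename in this repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed to follow the standard "<algo>-<env>.zip" convention
checkpoint = load_from_hub(
    repo_id="jiudth/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# The loaded policy can then be evaluated, e.g. in a gymnasium "LunarLander-v2" environment
```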
|
abacusai/MM-OV-bagel-DPO-34b-c1000-250 | abacusai | 2024-01-24T04:17:48Z | 49 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:abacusai/MetaMathFewshot",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-23T22:00:23Z | ---
license: apache-2.0
datasets:
- abacusai/MetaMathFewshot
---
DPO finetune of [abacusai/MM-Orc-Vic-bagel-34b-c1000](https://huggingface.co/abacusai/MM-Orc-Vic-bagel-34b-c1000) on the Bagel DPO dataset.
### Evaluation Results
| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- |
| | | | | | | | |
dsrestrepo/BERT_Lab_Values_10B_no_lab_id_no_repetition | dsrestrepo | 2024-01-24T04:15:00Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-01-24T04:09:31Z | # Model Details
#### Model Name: NumericBERT
#### Model Type: Transformer
#### Architecture: BERT
#### Training Method: Masked Language Modeling (MLM)
#### Training Data: MIMIC IV Lab values data
#### Training Hyperparameters:
- **Optimizer:** AdamW
- **Learning Rate:** 5e-5
- **Masking Rate:** 20%
- **Tokenization:** Custom numeric-to-text mapping using the TextEncoder class
### Text Encoding Process
**Overview:** Non-negative integers are converted into uppercase letter-based representations, allowing numerical values to be expressed as sequences of letters.
**Normalization and Binning:**
- **Method:** Log normalization and splitting into 10 bins.
- **Representation:** Each bin is represented by a letter (A-J).
### Token Construction:
- **Format:** `<<lab_value_bin>>`
- **Example:** For a lab value with a normalized value in bin 'C', the token might be `C`.
- **Columns Used:** 'Bic', 'Crt', 'Pot', 'Sod', 'Ure', 'Hgb', 'Plt', 'Wbc'.
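A minimal sketch of this encoding scheme; the actual `TextEncoder` class is not included in the card, so the bin edges and implementation details below are assumptions:
```python
import numpy as np

BINS = 10
LETTERS = "ABCDEFGHIJ"  # one letter per bin (A-J)

def encode_lab_values(values):
    """Log-normalize a column of lab values and map each one to a letter bin."""
    logged = np.log1p(np.asarray(values, dtype=float))        # log normalization
    lo, hi = logged.min(), logged.max()
    scaled = (logged - lo) / (hi - lo + 1e-12)                # scale to [0, 1]
    bins = np.minimum((scaled * BINS).astype(int), BINS - 1)  # split into 10 bins
    return [LETTERS[b] for b in bins]                         # one token per value, e.g. "C"

# Example: encode a handful of bicarbonate ('Bic') measurements
print(encode_lab_values([18.0, 22.0, 24.0, 31.0]))
```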
### Training Data Preprocessing
- **Column Selection:** Numerical values from selected lab values.
- **Text Encoding:** Numeric values are encoded into text using the process described above.
- **Masking:** 20% of the data is randomly masked during training.
### Model Output
- **Description:** Outputs predictions for masked values during training.
- **Format:** Contains the encoded text representing the predicted lab values.
### Limitations and Considerations
- **Numeric Data Representation:** The custom text representation may have limitations in capturing the intricacies of the original numeric data.
- **Training Data Source:** Performance may be influenced by the characteristics and biases inherent in the MIMIC IV dataset.
- **Generalizability:** The model's effectiveness outside the context of the training dataset is not guaranteed.
### Contact Information
- **Email:** [email protected]
- **Name:** David Restrepo
- **Affiliation:** MIT Critical Data - MIT
|
minchyeom/MemGPT-GGUF | minchyeom | 2024-01-24T04:13:16Z | 5 | 3 | null | [
"gguf",
"memgpt",
"function",
"function calling",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-01-24T02:48:09Z | ---
language:
- en
license: apache-2.0
tags:
- memgpt
- function
- function calling
---
# I recommend using the unquantized model over this as this model performs noticeably worse!
# Model Description
This repo contains a 7-billion-parameter language model fine-tuned from [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2). It is specifically designed for function calling in [MemGPT](https://memgpt.ai/) and demonstrates performance comparable to GPT-4 when working with MemGPT.
The original model has been quantized to Q8_0, using [llama.cpp](https://github.com/ggerganov/llama.cpp) for better inference speed ~~(original unquantized model coming soon)~~.
# Key Features
* Function calling
* Dedicated to working with MemGPT
* Supports medium context, trained with Sequences up to 8,192
# Usage
This model is designed to be run on various backends, such as [oobabooga's WebUI](https://github.com/oobabooga/text-generation-webui) or llama.cpp.
To run the model on WebUI, simply `git clone` the official WebUI repository, and run the appropriate script for your operating system. More details [here](https://github.com/oobabooga/text-generation-webui?tab=readme-ov-file#how-to-install).
Once you've installed WebUI, you can download this model from the `Model` tab. Next, choose the desired model (starsnatched/MemGPT in this case), and the backend is ready.
When you have WebUI or your desired backend running, you can open a terminal/powershell, and install MemGPT using `pip3 install -U pymemgpt`. Configure your MemGPT using `memgpt configure` before running MemGPT.
Use `memgpt run --no-verify` to chat with MemGPT!
# Prompt Template
This model uses a custom chat template. We're naming this template "memgpt". This can be configured via `memgpt configure`.
```
<turn> SYSTEM: You are MemGPT. </turn>
<turn> SYSTEM_ALERT: {'message': 'You have 50 prior messages with the user. These messages have been hidden from view due to memory constraints, but you can view them using functions.', 'time': '2023-10-07 11:50:03 PM PDT-0700'} </turn>
<turn> ASSISTANT: Bootup sequence complete. Persona activated. Testing messaging functionality. </turn>
<turn> FUNCTION CALL: {'name': 'send_message', 'arguments': '{\n ""message"": ""More human than human is our motto.""\n}'}<turn> FUNCTION: {""status"": ""OK"", ""message"": null, ""time"": ""2023-10-07 11:50:03 PM PDT-0700""} </turn>
<turn> LOGIN: {'last_login': '7 days 8 hours ago', 'time': '2023-10-07 11:50:03 PM PDT-0700'} </turn>
<turn> USER_MESSAGE: {'message': 'Hey, remember that time we talked about music? What was the artist you mentioned you could get into?', 'time': '2023-10-07 11:50:03 PM PDT-0700'} </turn>
<turn> ASSISTANT: Alright, let me search our conversation history about music to find the artist. </turn>
<turn> FUNCTION CALL: {'name': 'conversation_search', 'arguments': '{\n ""query"": ""music"",\n ""request_heartbeat"": true,\n ""page"": 0\n}'}<turn> FUNCTION: {""status"": ""OK"", ""message"": ""Showing 5 of 50 results (page 0/9): [\""timestamp: 2023-10-07 11:50:03 PM PDT-0700, user - I agree. Working out is a great way to burn off steam. Do you like country music?\"", \""timestamp: 2023-10-07 11:50:03 PM PDT-0700, assistant - A little bit. I can get into taylor swift.\"", \""timestamp: 2023-10-07 11:50:03 PM PDT-0700, user - Yeah between the both of us, could probably get a country song written about our distresses...perhaps Taylor Swift could perform it.\"", \""timestamp: 2023-10-07 11:50:03 PM PDT-0700, assistant - That would be a chart-topper, I'm sure. Something to blast while driving with the top down, maybe even in your next restoration project.\"", \""timestamp: 2023-10-07 11:50:03 PM PDT-0700, assistant - This sounds fun. Do you have any other hobby?\""]"", ""time"": ""2023-10-07 11:50:08 PM PDT-0700""} </turn>
<turn> HEARTBEAT: {'reason': 'request_heartbeat == true', 'time': '2023-10-07 11:50:08 PM PDT-0700'} </turn>
<turn> ASSISTANT: Ah, I found it. It was Taylor Swift that I mentioned I could get into. Do you still listen to her music, or have you found any new artists that you enjoy? </turn>
<turn> FUNCTION CALL: {'name': 'send_message', 'arguments': '{\n ""message"": ""Ah, I found it. It was Taylor Swift that I mentioned I could get into. Do you still listen to her music, or have you found any new artists that you enjoy?""\n}'}<turn> FUNCTION: {""status"": ""OK"", ""message"": null, ""time"": ""2023-10-07 11:50:14 PM PDT-0700""} </turn>
```
# Model Details
* Developed by: @starsnatched
* Model type: This repo contains a language model based on the transformer decoder architecture.
* Language: English
* Contact: For any questions, concerns or comments about this model, please contact me at Discord, @starsnatched.
# Training Infrastructure
* Hardware: The model in this repo was trained on 2x A100 80GB GPUs.
# Intended Use
The model is designed to be used as the base model for MemGPT agents.
# Limitations and Bias
The model may exhibit unreliable, unsafe, or biased behaviours. Please double check the results this model may produce. |
judith0/classification_INE_v1 | judith0 | 2024-01-24T04:10:50Z | 179 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-01-24T04:08:53Z | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9767441860465116
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2135
- Accuracy: 0.9767
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.96 | 6 | 0.4536 | 0.9186 |
| 0.691 | 1.92 | 12 | 0.2135 | 0.9767 |
| 0.691 | 2.88 | 18 | 0.1602 | 0.9767 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Ont/Marcoroni-13B | Ont | 2024-01-24T04:09:02Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-08T17:08:24Z | ---
license: cc-by-nc-4.0
datasets:
- Open-Orca/OpenOrca
language:
- en
pipeline_tag: text-generation
---
# Marcoroni-13B - Safetensors
A conversion of the original model `AIDC-ai-business/Marcoroni-13B` to safetensors format.
# Marcoroni-13B
# Model Details
* **Trained by**: AIDC AI-Business.
* **Model type:** **Marcoroni-13B** is an auto-regressive language model based on the Llama 2 transformer architecture.
* **Language(s)**: English
* **License for Marcoroni-13B base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))
# Prompting
## Prompt Template for alpaca style
```
### Instruction:
<prompt> (without the <>)
### Response:
```
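A minimal sketch of applying this template with 🤗 Transformers, assuming the safetensors repository id `Ont/Marcoroni-13B` from this card:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ont/Marcoroni-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the Alpaca-style prompt shown above
prompt = (
    "### Instruction:\n"
    "Summarize the plot of Romeo and Juliet in two sentences.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```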
# Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard))
| Metric | Value |
|-----------------------|-------|
| Avg. | 65.76 |
| ARC (25-shot) | 62.46 |
| HellaSwag (10-shot) | 83.27 |
| MMLU (5-shot) | 59.63 |
| TruthfulQA (0-shot) | 57.7 |
|
stablediffusionapi/ceshi | stablediffusionapi | 2024-01-24T03:58:44Z | 25 | 1 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-01-24T03:56:45Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# ceshi API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed.
Replace the key in the code below, and change **model_id** to "ceshi".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/ceshi)
Model link: [View model](https://modelslab.com/models/ceshi)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "ceshi",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
varun-v-rao/roberta-base-mnli-model3 | varun-v-rao | 2024-01-24T03:51:53Z | 96 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-24T03:14:30Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-mnli-model3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-mnli-model3
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3532
- Accuracy: 0.8741
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4041 | 1.0 | 1534 | 0.3568 | 0.8668 |
| 0.3329 | 2.0 | 3068 | 0.3548 | 0.8712 |
| 0.2845 | 3.0 | 4602 | 0.3532 | 0.8741 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Sharathhebbar24/math_gpt2 | Sharathhebbar24 | 2024-01-24T03:45:22Z | 217 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"maths",
"arxiv-math",
"mathgpt2",
"en",
"dataset:ArtifactAI/arxiv-math-instruct-50k",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-14T15:10:58Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- maths
- arxiv-math
- gpt2
- mathgpt2
datasets:
- ArtifactAI/arxiv-math-instruct-50k
widget:
- text: Which motion is formed by an incident particle?
example_title: Example 1
- text: What type of diffusional modeling is used for diffusion?
example_title: Example 2
---
This model is a finetuned version of `gpt2` using `ArtifactAI/arxiv-math-instruct-50k`.
## Model description
GPT-2 is a transformers model pre-trained on a very large corpus of English data in a self-supervised fashion. This
means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifting one token (word or piece of word) to the right. The model uses a masking mechanism to make sure the
predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was trained for, however, which is generating texts from a
prompt.
### To use this model
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> model_name = "Sharathhebbar24/math_gpt2"
>>> model = AutoModelForCausalLM.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
>>> def generate_text(prompt):
>>> inputs = tokenizer.encode(prompt, return_tensors='pt')
>>> outputs = model.generate(inputs, max_length=64, pad_token_id=tokenizer.eos_token_id)
>>> generated = tokenizer.decode(outputs[0], skip_special_tokens=True)
>>> return generated[:generated.rfind(".")+1]
>>> prompt = "What structure is classified as a definite lie algebra?"
>>> res = generate_text(prompt)
>>> res
``` |
nameissakthi/my_awesome_model | nameissakthi | 2024-01-24T03:39:14Z | 44 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-23T15:19:39Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: nameissakthi/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nameissakthi/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2493
- Validation Loss: 0.1894
- Train Accuracy: 0.9244
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2493 | 0.1894 | 0.9244 | 0 |
### Framework versions
- Transformers 4.37.0
- TensorFlow 2.10.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|
hfl/chinese-alpaca-2-13b-gguf | hfl | 2024-01-24T03:33:21Z | 220 | 10 | null | [
"gguf",
"zh",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-11-16T05:18:19Z | ---
license: apache-2.0
language:
- zh
- en
---
# Chinese-Alpaca-2-13B-GGUF
This repository contains the GGUF-v3 models (llama.cpp compatible) for **Chinese-Alpaca-2-13B**.
## Performance
Metric: PPL, lower is better
| Quant | original | imatrix (`-im`) |
|-----|------|------|
| Q2_K | 13.7636 +/- 0.19446 | 20.6803 +/- 0.31594 |
| Q3_K | 9.5388 +/- 0.13078 | 9.1016 +/- 0.12565 |
| Q4_0 | 9.1694 +/- 0.12668 | - |
| Q4_K | 8.6633 +/- 0.11957 | 8.6377 +/- 0.11932 |
| Q5_0 | 8.6745 +/- 0.12020 | - |
| Q5_K | 8.5161 +/- 0.11796 | 8.5210 +/- 0.11803 |
| Q6_K | 8.4943 +/- 0.11759 | 8.5011 +/- 0.11775 |
| Q8_0 | 8.4595 +/- 0.11718 | - |
| F16 | 8.4550 +/- 0.11713 | - |
*The models with the `-im` suffix are generated with an importance matrix, which generally gives better performance (though not always).*
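A minimal sketch of loading one of these GGUF files with `llama-cpp-python`; the exact filename below is an assumption (use whichever quant you downloaded from this repository):
```python
from llama_cpp import Llama

# Assumed local filename for a Q4_K quant downloaded from this repository
llm = Llama(model_path="./chinese-alpaca-2-13b.Q4_K.gguf", n_ctx=4096)

output = llm(
    "请用三句话介绍一下大语言模型。",  # "Introduce large language models in three sentences."
    max_tokens=256,
)
print(output["choices"][0]["text"])
```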
## Others
For Hugging Face version, please see: https://huggingface.co/hfl/chinese-alpaca-2-13b
Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for more details. |
hfl/chinese-alpaca-2-13b-16k-gguf | hfl | 2024-01-24T03:30:48Z | 162 | 1 | null | [
"gguf",
"zh",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-11-16T06:35:17Z | ---
license: apache-2.0
language:
- zh
- en
---
# Chinese-Alpaca-2-13B-16K-GGUF
This repository contains the GGUF-v3 models (llama.cpp compatible) for **Chinese-Alpaca-2-13B-16K**.
## Performance
Metric: PPL, lower is better
| Quant | original | imatrix (`-im`) |
|-----|------|------|
| Q2_K | 12.7790 +/- 0.17943 | 13.8057 +/- 0.19614 |
| Q3_K | 10.0834 +/- 0.14063 | 9.6355 +/- 0.13483 |
| Q4_0 | 9.7072 +/- 0.13563 | - |
| Q4_K | 9.2864 +/- 0.13001 | 9.2097 +/- 0.12874 |
| Q5_0 | 9.2062 +/- 0.12846 | - |
| Q5_K | 9.0912 +/- 0.12705 | 9.0701 +/- 0.12668 |
| Q6_K | 9.0799 +/- 0.12681 | 9.0558 +/- 0.12653 |
| Q8_0 | 9.0200 +/- 0.12616 | - |
| F16 | 9.0142 +/- 0.12603 | - |
*The models with the `-im` suffix are generated with an importance matrix, which generally gives better performance (though not always).*
## Others
For Hugging Face version, please see: https://huggingface.co/hfl/chinese-alpaca-2-13b-16k
Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for more details. |
hfl/chinese-llama-2-13b-16k-gguf | hfl | 2024-01-24T03:28:08Z | 148 | 1 | null | [
"gguf",
"zh",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-11-16T06:34:38Z | ---
license: apache-2.0
language:
- zh
- en
---
# Chinese-LLaMA-2-13B-16K-GGUF
This repository contains the GGUF-v3 models (llama.cpp compatible) for **Chinese-LLaMA-2-13B-16K**.
## Performance
Metric: PPL, lower is better
| Quant | original | imatrix (`-im`) |
|-----|------|------|
| Q2_K | 11.8958 +/- 0.20739 | 13.0017 +/- 0.23003 |
| Q3_K | 9.7130 +/- 0.17037 | 9.3443 +/- 0.16582 |
| Q4_0 | 9.2002 +/- 0.16219 | - |
| Q4_K | 9.0055 +/- 0.15918 | 8.9848 +/- 0.15908 |
| Q5_0 | 8.8441 +/- 0.15690 | - |
| Q5_K | 8.8999 +/- 0.15751 | 8.8983 +/- 0.15753 |
| Q6_K | 8.8944 +/- 0.15776 | 8.8833 +/- 0.15760 |
| Q8_0 | 8.8745 +/- 0.15745 | - |
| F16 | 8.8687 +/- 0.15729 | - |
*The models with the `-im` suffix are generated with an importance matrix, which generally gives better performance (though not always).*
## Others
For Hugging Face version, please see: https://huggingface.co/hfl/chinese-llama-2-13b-16k
Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for more details. |
kodonho/Momo-70b-DPO-mixed | kodonho | 2024-01-24T03:18:51Z | 62 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:moreh/MoMo-72B-lora-1.8.4-DPO",
"base_model:merge:moreh/MoMo-72B-lora-1.8.4-DPO",
"base_model:moreh/MoMo-72B-lora-1.8.6-DPO",
"base_model:merge:moreh/MoMo-72B-lora-1.8.6-DPO",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-17T05:31:31Z | ---
license: mit
tags:
- mergekit
- merge
base_model:
- moreh/MoMo-70B-lora-1.8.6-DPO
- moreh/MoMo-70B-lora-1.8.4-DPO
---
# MoMo-70B-lora-1.8.6-DPO based model with gradient slerp
This is an English merged model based on
* [moreh/MoMo-70B-lora-1.8.6-DPO]
GPU code example:
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "kodonho/kodonho/Momo-70b-DPO-mixed"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float32, device_map='auto',local_files_only=False, load_in_4bit=True
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
``` |
hfl/chinese-alpaca-2-7b-64k-gguf | hfl | 2024-01-24T03:03:15Z | 234 | 5 | null | [
"gguf",
"zh",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-12-23T07:12:33Z | ---
license: apache-2.0
language:
- zh
- en
---
# Chinese-Alpaca-2-7B-64K
This repository contains the GGUF-v3 version (llama.cpp compatible) of **Chinese-Alpaca-2-7B-64K**, which is tuned from Chinese-Alpaca-2-7B with the **YaRN method**.
## Performance
Metric: PPL, lower is better
| Quant | original | imatrix (`-im`) |
|-----|------|------|
| Q2_K | 9.8201 +/- 0.13298 | 10.3057 +/- 0.14197 |
| Q3_K | 8.4435 +/- 0.11467 | 8.3556 +/- 0.11316 |
| Q4_0 | 8.3573 +/- 0.11496 | - |
| Q4_K | 8.0558 +/- 0.10948 | 8.0557 +/- 0.10964 |
| Q5_0 | 8.0220 +/- 0.10954 | - |
| Q5_K | 7.9388 +/- 0.10802 | 7.9440 +/- 0.10815 |
| Q6_K | 7.9267 +/- 0.10792 | 7.9126 +/- 0.10775 |
| Q8_0 | 7.9117 +/- 0.10773 | - |
| F16 | 7.9124 +/- 0.10780 | - |
*The models with the `-im` suffix are generated with an importance matrix, which generally gives better performance (though not always).*
## Others
For full model in HuggingFace format, please see: https://huggingface.co/hfl/chinese-alpaca-2-7b-64k
Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for more details. |
mistrex/test | mistrex | 2024-01-24T03:02:34Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2024-01-24T03:02:34Z | ---
license: other
license_name: test
license_link: LICENSE
---
|
hfl/chinese-alpaca-2-7b-16k-gguf | hfl | 2024-01-24T03:01:24Z | 196 | 1 | null | [
"gguf",
"zh",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-11-16T06:33:48Z | ---
license: apache-2.0
language:
- zh
- en
---
# Chinese-Alpaca-2-7B-16K-GGUF
This repository contains the GGUF-v3 models (llama.cpp compatible) for **Chinese-Alpaca-2-7B-16K**.
## Performance
Metric: PPL, lower is better
| Quant | original | imatrix (`-im`) |
|-----|------|------|
| Q2_K | 11.8181 +/- 0.16402 | 13.6285 +/- 0.19294 |
| Q3_K | 9.5596 +/- 0.13369 | 9.3748 +/- 0.13108 |
| Q4_0 | 9.6480 +/- 0.13459 | - |
| Q4_K | 8.9622 +/- 0.12507 | 8.9229 +/- 0.12467 |
| Q5_0 | 8.9274 +/- 0.12485 | - |
| Q5_K | 8.8370 +/- 0.12353 | 8.8221 +/- 0.12348 |
| Q6_K | 8.7830 +/- 0.12290 | 8.7695 +/- 0.12260 |
| Q8_0 | 8.7644 +/- 0.12261 | - |
| F16 | 8.7676 +/- 0.12268 | - |
*The models with the `-im` suffix are generated with an importance matrix, which generally gives better performance (though not always).*
## Others
For Hugging Face version, please see: https://huggingface.co/hfl/chinese-alpaca-2-7b-16k
Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for more details. |
hfl/chinese-alpaca-2-7b-gguf | hfl | 2024-01-24T02:57:41Z | 503 | 15 | null | [
"gguf",
"zh",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-11-16T04:56:58Z | ---
license: apache-2.0
language:
- zh
- en
---
# Chinese-Alpaca-2-7B-GGUF
This repository contains the GGUF-v3 models (llama.cpp compatible) for **Chinese-Alpaca-2-7B**.
## Performance
Metric: PPL, lower is better
| Quant | original | imatrix (`-im`) |
|-----|------|------|
| Q2_K | 10.3705 +/- 0.14109 | 11.7442 +/- 0.16034 |
| Q3_K | 8.8478 +/- 0.12085 | 8.7104 +/- 0.11925 |
| Q4_0 | 8.6418 +/- 0.11876 | - |
| Q4_K | 8.3294 +/- 0.11396 | 8.3034 +/- 0.11391 |
| Q5_0 | 8.3320 +/- 0.11411 | - |
| Q5_K | 8.2361 +/- 0.11298 | 8.2136 +/- 0.11281 |
| Q6_K | 8.1956 +/- 0.11259 | 8.1852 +/- 0.11246 |
| Q8_0 | 8.1784 +/- 0.11232 | - |
| F16 | 8.1799 +/- 0.11243 | - |
*The models with the `-im` suffix are generated with an importance matrix, which generally gives better performance (though not always).*
## Others
For Hugging Face version, please see: https://huggingface.co/hfl/chinese-alpaca-2-7b
Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for more details. |
hfl/chinese-alpaca-2-1.3b-gguf | hfl | 2024-01-24T02:54:44Z | 412 | 6 | null | [
"gguf",
"zh",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-11-16T05:21:50Z | ---
license: apache-2.0
language:
- zh
- en
---
# Chinese-Alpaca-2-1.3B-GGUF
This repository contains the GGUF-v3 models (llama.cpp compatible) for **Chinese-Alpaca-2-1.3B**.
## Performance
Metric: PPL, lower is better
| Quant | original | imatrix (`-im`) |
|-----|------|------|
| Q2_K | 19.9339 +/- 0.29752 | 18.8935 +/- 0.28558 |
| Q3_K | 17.2487 +/- 0.27668 | 17.2950 +/- 0.27994 |
| Q4_0 | 16.1358 +/- 0.25091 | - |
| Q4_K | 16.4583 +/- 0.26453 | 16.2688 +/- 0.26216 |
| Q5_0 | 15.9068 +/- 0.25545 | - |
| Q5_K | 15.7547 +/- 0.25207 | 16.0190 +/- 0.25782 |
| Q6_K | 15.8166 +/- 0.25359 | 15.7357 +/- 0.25210 |
| Q8_0 | 15.7972 +/- 0.25384 | - |
| F16 | 15.8098 +/- 0.25403 | - |
*The models with the `-im` suffix are generated with an importance matrix, which generally gives better performance (though not always).*
## Others
For Hugging Face version, please see: https://huggingface.co/hfl/chinese-alpaca-2-1.3b
Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for more details. |
hfl/chinese-llama-2-7b-64k-gguf | hfl | 2024-01-24T02:53:35Z | 208 | 2 | null | [
"gguf",
"zh",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-12-21T05:45:22Z | ---
license: apache-2.0
language:
- zh
- en
---
# Chinese-LLaMA-2-7B-64K
This repository contains the GGUF-v3 version (llama.cpp compatible) of **Chinese-LLaMA-2-7B-64K**, which is tuned from Chinese-LLaMA-2-7B with the **YaRN method**.
## Performance
Metric: PPL, lower is better
| Quant | original | imatrix (`-im`) |
|-----|------|------|
| Q2_K | 11.5424 +/- 0.24106 | 12.1599 +/- 0.26050 |
| Q3_K | 10.0152 +/- 0.21296 | 9.9269 +/- 0.21335 |
| Q4_0 | 9.7500 +/- 0.20872 | - |
| Q4_K | 9.7687 +/- 0.21133 | 9.7239 +/- 0.20999 |
| Q5_0 | 9.4647 +/- 0.20280 | - |
| Q5_K | 9.6229 +/- 0.20829 | 9.5673 +/- 0.20675 |
| Q6_K | 9.5996 +/- 0.20816 | 9.5753 +/- 0.20734 |
| Q8_0 | 9.4078 +/- 0.20378 | - |
| F16 | 9.5750 +/- 0.20735 | - |
*The models with the `-im` suffix are generated with an importance matrix, which generally gives better performance (though not always).*
## Others
For full model in HuggingFace format, please see: https://huggingface.co/hfl/chinese-llama-2-7b-64k
Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for more details. |
hfl/chinese-llama-2-7b-16k-gguf | hfl | 2024-01-24T02:52:02Z | 185 | 2 | null | [
"gguf",
"zh",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-11-16T06:23:32Z | ---
license: apache-2.0
language:
- zh
- en
---
# Chinese-LLaMA-2-7B-16K-GGUF
This repository contains the GGUF-v3 models (llama.cpp compatible) for **Chinese-LLaMA-2-7B-16K**.
## Performance
Metric: PPL, lower is better
| Quant | original | imatrix (`-im`) |
|-----|------|------|
| Q2_K | 11.5580 +/- 0.23848 | 12.3757 +/- 0.26048 |
| Q3_K | 9.8263 +/- 0.20663 | 9.7124 +/- 0.20569 |
| Q4_0 | 9.6558 +/- 0.20657 | - |
| Q4_K | 9.5590 +/- 0.20460 | 9.4945 +/- 0.20337 |
| Q5_0 | 9.2767 +/- 0.19835 | - |
| Q5_K | 9.4303 +/- 0.20305 | 9.4275 +/- 0.20291 |
| Q6_K | 9.4046 +/- 0.20272 | 9.4106 +/- 0.20284 |
| Q8_0 | 9.2145 +/- 0.19943 | - |
| F16 | 9.4045 +/- 0.20289 | - |
*Models with the `-im` suffix are quantized with an importance matrix, which generally (though not always) gives better performance.*
## Others
For Hugging Face version, please see: https://huggingface.co/hfl/chinese-llama-2-7b-16k
Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for more details. |
dsrestrepo/BERT_Lab_Values_10B_lab_id_repetition | dsrestrepo | 2024-01-24T02:49:22Z | 95 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-01-23T23:54:09Z | # Model Details
##### Model Name: NumericBERT
##### Model Type: Transformer
##### Architecture: BERT
##### Training Method: Masked Language Modeling (MLM)
##### Training Data: MIMIC IV Lab values data
##### Training Hyperparameters:
* Optimizer: AdamW
* Learning Rate: 5e-5
* Masking Rate: 20%
* Tokenization: Custom numeric-to-text mapping using the TextEncoder class
### Text Encoding Process
Overview: Non-negative integers are converted into uppercase letter-based representations, allowing numerical values to be expressed as sequences of letters.
Normalization and Binning:
* Method: Log normalization and splitting into 10 bins.
* Representation: Each bin is represented by a letter (A-J).
### Token Construction:
Format: `<<lab_id_token>> <<lab_id_token>><<lab_value_bin>>`
Example: For a lab value of type 'Bic' with a normalized value in bin 'C', the token might be Bic BicC.
Columns Used: 'Bic', 'Crt', 'Pot', 'Sod', 'Ure', 'Hgb', 'Plt', 'Wbc'.
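The card does not publish the encoder itself, so the following is only a hypothetical sketch of the log-normalize / 10-bin / letter-mapping scheme described above; the function name, bounds, and example values are assumptions.

```python
import numpy as np

BIN_LETTERS = "ABCDEFGHIJ"  # one letter per bin, as described above

def encode_lab_value(lab_id: str, value: float, vmin: float, vmax: float) -> str:
    """Map a numeric lab value to the '<lab_id> <lab_id><bin letter>' token format."""
    # Log-normalize into [0, 1]; assumes positive values and bounds taken from training data.
    norm = (np.log(value) - np.log(vmin)) / (np.log(vmax) - np.log(vmin))
    norm = float(np.clip(norm, 0.0, 1.0))
    # Split [0, 1] into 10 bins and pick the corresponding letter A-J.
    bin_idx = min(int(norm * 10), 9)
    return f"{lab_id} {lab_id}{BIN_LETTERS[bin_idx]}"

# Prints something like "Bic BicG" for these illustrative bounds;
# a value falling in bin 'C' would yield "Bic BicC" as in the card's example.
print(encode_lab_value("Bic", 24.0, vmin=5.0, vmax=50.0))
```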
### Training Data Preprocessing
Column Selection: Numerical values from selected lab values.
Text Encoding: Numeric values are encoded into text using the process described above.
Masking: 20% of the data is randomly masked during training.
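The masking step corresponds to standard MLM collation. Below is a hedged sketch using the transformers data collator with a 20% mask rate; the tokenizer choice is an assumption, since the card does not name one explicitly.

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Tokenizer choice is an assumption -- the card does not specify it.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.2,  # 20% of tokens masked, as described above
)

batch = collator([tokenizer("Bic BicC Crt CrtF")])
print(batch["input_ids"].shape, batch["labels"].shape)
```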
### Model Output
Description: Outputs predictions for masked values during training.
Format: Contains the encoded text representing the predicted lab values.
### Limitations and Considerations
Numeric Data Representation: The custom text representation may have limitations in capturing the intricacies of the original numeric data.
Training Data Source: Performance may be influenced by the characteristics and biases inherent in the MIMIC IV dataset.
Generalizability: The model's effectiveness outside the context of the training dataset is not guaranteed.
### Contact Information
Email: [email protected]
David Restrepo
MIT Critical Data - MIT |
varun-v-rao/roberta-base-mnli-model1 | varun-v-rao | 2024-01-24T02:34:15Z | 90 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-24T01:56:39Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-mnli-model1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-mnli-model1
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3473
- Accuracy: 0.8743
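A usage sketch not included in the original card: loading the checkpoint for sequence classification. The premise/hypothesis pair is illustrative, and how the labels map to entailment/neutral/contradiction should be checked against the checkpoint's `id2label` config rather than assumed.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "varun-v-rao/roberta-base-mnli-model1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Label meaning depends on the checkpoint's id2label mapping.
print(model.config.id2label[logits.argmax(dim=-1).item()])
```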
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4049 | 1.0 | 1534 | 0.3517 | 0.8679 |
| 0.3365 | 2.0 | 3068 | 0.3372 | 0.8714 |
| 0.2863 | 3.0 | 4602 | 0.3473 | 0.8743 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
amiller/ppo-LunarLander-v2 | amiller | 2024-01-24T02:17:33Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-24T02:17:13Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.89 +/- 22.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch to load the checkpoint (the filename inside the repo is an assumption -- check the repository's file list):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The checkpoint filename is an assumption; verify it against the repo files.
checkpoint = load_from_hub(repo_id="amiller/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
jburtoft/TDXL | jburtoft | 2024-01-24T02:14:17Z | 0 | 0 | null | [
"region:us"
] | null | 2024-01-23T04:45:47Z | To run:
Stand up an Inferentia inf2.8xlarge instance with the Hugging Face AMI.
Update the Neuron SDK to 2.16:
```
sudo apt-get update -y \
&& sudo apt-get install -y --no-install-recommends \
aws-neuronx-dkms=2.15.9.0 \
aws-neuronx-collectives=2.19.7.0-530fb3064 \
aws-neuronx-runtime-lib=2.19.5.0-97e2d271b \
aws-neuronx-tools=2.16.1.0
pip3 install --upgrade \
neuronx-cc==2.12.54.0 \
torch-neuronx==1.13.1.1.13.0 \
transformers-neuronx==0.9.474 \
--extra-index-url=https://pip.repos.neuron.amazonaws.com
python3 -m pip install git+https://github.com/aws-neuron/transformers-neuronx.git
pip install git+https://github.com/huggingface/optimum.git
pip install git+https://github.com/huggingface/optimum-neuron.git
```
Then you should be able to run it right from the repository:
```
from optimum.neuron import NeuronStableDiffusionXLPipeline
pipeline = NeuronStableDiffusionXLPipeline.from_pretrained("jburtoft/TDXL", device_ids=[0, 1])
prompt = "a photo of an astronaut riding a horse on mars"
# generate image
image = pipeline(prompt).images[0]
image.save("generated_image.jpg")
``` |
Sesgaro/picin_adapter | Sesgaro | 2024-01-24T02:08:48Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:adapter:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2024-01-24T02:05:16Z | ---
library_name: peft
base_model: ybelkada/falcon-7b-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
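Since this section is left blank, the following is only a hypothetical sketch of loading the adapter on top of its stated base model with PEFT; the dtype, device map, and prompt are assumptions, not documented settings.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base_id = "ybelkada/falcon-7b-sharded-bf16"
adapter_id = "Sesgaro/picin_adapter"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the PEFT adapter weights from this repository.
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))
```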
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
Floyd93/Masked_Grammer_Jan_2024 | Floyd93 | 2024-01-24T02:06:39Z | 118 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"camembert",
"fill-mask",
"generated_from_trainer",
"base_model:almanach/camembert-base",
"base_model:finetune:almanach/camembert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-01-22T03:09:43Z | ---
license: mit
base_model: camembert-base
tags:
- generated_from_trainer
model-index:
- name: Masked_Grammer_Jan_2024
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Masked_Grammer_Jan_2024
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6629
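A minimal sketch (not from the original card) of querying the fine-tuned checkpoint with the fill-mask pipeline; note that CamemBERT-style models use `<mask>` as the mask token, and the example sentence is illustrative.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Floyd93/Masked_Grammer_Jan_2024")

# CamemBERT-style models use "<mask>" as the mask token.
for pred in fill_mask("Le chat <mask> sur le canapé."):
    print(f"{pred['token_str']!r}  score={pred['score']:.3f}")
```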
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 289 | 1.6563 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
atsstagram/distilbert-base-uncased-finetuned-emotion-imbalanced-1000plus3000 | atsstagram | 2024-01-24T01:46:45Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-22T22:32:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion-imbalanced-1000plus3000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-imbalanced-1000plus3000
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1446
- Accuracy: 0.5835
- F1: 0.4990
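A hedged usage sketch, not part of the original card; the input sentence is illustrative and the emotion label set depends on the checkpoint's config.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="atsstagram/distilbert-base-uncased-finetuned-emotion-imbalanced-1000plus3000",
)
print(classifier("I can't believe how well this turned out!"))
```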
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.4611 | 1.0 | 63 | 1.2669 | 0.532 | 0.4233 |
| 1.1433 | 2.0 | 126 | 1.1446 | 0.5835 | 0.4990 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu121
- Datasets 1.16.1
- Tokenizers 0.15.0
|
ifuseok/ft-solar-10.7b-v2.1-dpo | ifuseok | 2024-01-24T01:41:14Z | 2,283 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"dataset:nlpai-lab/databricks-dolly-15k-ko",
"dataset:kyujinpy/KOR-OpenOrca-Platypus-v3",
"dataset:KETI-AIR/kor_boolq",
"dataset:heegyu/open-korean-instructions",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-13T16:09:54Z | ---
language:
- ko
pipeline_tag: text-generation
datasets:
- nlpai-lab/databricks-dolly-15k-ko
- kyujinpy/KOR-OpenOrca-Platypus-v3
- KETI-AIR/kor_boolq
- heegyu/open-korean-instructions
license: cc-by-nc-sa-4.0
---
**Input** Models input text only.
**Output** Models generate text only.
**Base Model** [yanolja/KoSOLAR-10.7B-v0.1](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.1-deprecated)
**Training Dataset**
- [nlpai-lab/databricks-dolly-15k-ko](https://huggingface.co/datasets/nlpai-lab/databricks-dolly-15k-ko)
- [kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3)
- [heegyu/open-korean-instructions](https://huggingface.co/datasets/heegyu/open-korean-instructions)
- [KETI-AIR/kor_boolq](https://huggingface.co/datasets/KETI-AIR/kor_boolq)
- [Part of the AI Hub English-Korean translation data](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&dataSetSn=71593)
# Implementation Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "ifuseok/sft-solar-10.7b-v2.1-dpo"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
# Prompt Example
```
### System:
시스템 메시지 입니다.
### User:
유저 입니다.
### Assistant
어시스턴트 입니다.
``` |
gate369/Ingot-7b-slerp-7-forged-mirror | gate369 | 2024-01-24T01:37:25Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/Ingot-7b-slerp-6",
"liminerity/Ingot-7b-slerp-5",
"base_model:liminerity/Ingot-7b-slerp-5",
"base_model:merge:liminerity/Ingot-7b-slerp-5",
"base_model:liminerity/Ingot-7b-slerp-6",
"base_model:merge:liminerity/Ingot-7b-slerp-6",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-24T01:29:10Z | ---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/Ingot-7b-slerp-6
- liminerity/Ingot-7b-slerp-5
base_model:
- liminerity/Ingot-7b-slerp-6
- liminerity/Ingot-7b-slerp-5
---
# Ingot-7b-slerp-7-forged-mirror
Ingot-7b-slerp-7-forged-mirror is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/Ingot-7b-slerp-6](https://huggingface.co/liminerity/Ingot-7b-slerp-6)
* [liminerity/Ingot-7b-slerp-5](https://huggingface.co/liminerity/Ingot-7b-slerp-5)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/Ingot-7b-slerp-6
layer_range: [0, 32]
- model: liminerity/Ingot-7b-slerp-5
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/Ingot-7b-slerp-6
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "liminerity/Ingot-7b-slerp-7-forged-mirror"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Tele-AI/telechat-7B-int4 | Tele-AI | 2024-01-24T01:31:13Z | 24 | 76 | transformers | [
"transformers",
"telechat",
"text-generation",
"custom_code",
"arxiv:2401.03804",
"arxiv:2104.09864",
"arxiv:2002.05202",
"arxiv:1910.07467",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-01-08T07:49:11Z | ---
license: apache-2.0
---
<div align="center">
<h1>
星辰语义大模型-TeleChat
</h1>
</div>
<p align="center">
🤗 <a href="https://huggingface.co/Tele-AI/Telechat-7B" target="_blank">Hugging Face</a> • 🏔 <a href="" target="_blank">MindSpore</a>️ • 🦉 <a href="https://github.com/Tele-AI/Telechat" target="_blank">github</a>️ • 🐾 <a href="https://gitee.com/Tele-AI/tele-chat" target="_blank">gitee</a>️ • 💬 <a href="https://github.com/Tele-AI/Telechat/blob/master/images/wechat.jpg" target="_blank">WeChat</a>
</p>
<p align="center">
<a href="https://arxiv.org/abs/2401.03804" target="_blank"> Tech Report </a>
</p>
# 目录
- [模型介绍](#模型介绍)
- [数据开源](#数据开源)
- [效果评测](#效果评测)
- [模型推理和部署](#模型推理和部署)
- [模型微调](#模型微调)
- [模型量化](#模型量化)
- [国产GPU适配](#国产GPU适配)
- [声明、协议、引用](#声明协议引用)
# 最新动态
- 2024.1月底开源12B版本模型(待开放)
- 2024.1.11 开源1T中文数据集
- 2024.1.10 开源7B版本chat模型及其量化版本
# 模型介绍
### 星辰语义大模型-TeleChat
- 星辰语义大模型TeleChat是由中电信人工智能科技有限公司研发训练的大语言模型,采用1.5万亿 Tokens中英文高质量语料进行训练。
- 本次开源了对话模型**TeleChat-7B-bot**,以及其`huggingface`格式的权重文件。此外,我们还开源了7B模型的int8和int4量化版本。
### 模型结构
我们采用标准的 `Decoder-only` 结构设计了 **TeleChat** 模型,并在模型维度做了如下的一些改进:
- **位置编码**:我们使用 [Rotary Embedding](https://arxiv.org/pdf/2104.09864.pdf) 的位置编码方法,该方法将相对位置信息依赖集成到 self-attention 中,并且具有较好的位置外推性。Rotary Embedding还可以较好地与Flash-Attention v2 配合使用,将模型的训练速度提升约20%。
- **激活函数**:我们使用 [SwiGLU](https://arxiv.org/pdf/2002.05202.pdf) 激活函数来替代GELU激活函数 , 为了减少计算量,将`ffn_hidden_size`设置为小于原始SwiGLU中的4倍隐藏层大小。
- **层标准化**: 基于 [RMSNorm](https://arxiv.org/abs/1910.07467) 的 Pre-Normalization。
| | layer_num | hidden_size | ffn_hidden_size | head_num | 是否使用embed-layernorm |
|-----| --------- | ----------- | --------------- | -------- | ----------------------- |
| 7B | 30 | 4096 | 12288 | 32 | 否 |
| 12B | 38 | 5120 | 12288 | 32 | 否 |
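As a rough illustration of the SwiGLU feed-forward block described above, here is a hedged PyTorch sketch using the 7B sizes from the table; the layer names and structure are assumptions for illustration, not TeleChat's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUFFN(nn.Module):
    """Illustrative SwiGLU feed-forward block (sizes taken from the 7B row above)."""

    def __init__(self, hidden_size: int = 4096, ffn_hidden_size: int = 12288):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, ffn_hidden_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, ffn_hidden_size, bias=False)
        self.down_proj = nn.Linear(ffn_hidden_size, hidden_size, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SwiGLU: silu(W_gate x) * (W_up x), then project back to hidden_size.
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

print(SwiGLUFFN()(torch.randn(1, 8, 4096)).shape)  # torch.Size([1, 8, 4096])
```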
---
我们开源的TeleChat模型:
- 支持deepspeed微调,开源了基于deepspeed的训练代码,支持Zero并行显存优化,同时集成了FlashAttention2
- 多轮能力支持。开源了多轮数据构建方式,针对多轮模型训练集成了针对多轮的mask loss训练方式,更好的聚焦多轮答案,提升问答效果。
- 外推能力提升。开源了8K训练版本模型,采用NTK-aware外推和attention scaling外推方式,可以外推到96K。
- 具备较好的长文生成能力。在工作总结、工作计划、PPT大纲、申论、招标书、邮件、方案、周报、JD写作等长文写作任务重具有较好的表现。
本次发布版本和下载链接见下表
| 模型版本 | 下载链接 |
|---------| ----------------- |
| 7B-FP16 | [TeleChat-FP16](https://huggingface.co/Tele-AI/Telechat-7B) |
| 7B-int8 | [TeleChat-int8](https://huggingface.co/Tele-AI/Telechat-7B-int8) |
| 7B-int4 | [TeleChat-int4](https://huggingface.co/Tele-AI/Telechat-7B-int4) |
**镜像下载**
为了便于大家快速上手,我们提供了可运行的环境镜像,下载地址:[镜像下载](https://cloud.189.cn/t/EbAriaQfa2mm) (访问码:2uik)
# 数据开源
### 数据介绍
TeleChat-PTD 是由电信星辰大模型**TeleChat**预训练语料中抽取出的的综合性大规模中文数据集。数据主要来源于网页、书籍、官方媒体等。 我们使用规则+模型的方式进行了相关的过滤,并对数据进行了相似性去重,尽可能地提取出高质量地数据。
TeleChat-PTD 数据集大约公开了2.7亿条数据,数据由纯中文文本构成构成,原始大小约1TB,压缩后480G,共189个文件。数据集中已经去除了其它冗余信息。
### 数据下载
huggingface下载地址:TODO
天翼云盘下载地址:TODO
# 效果评测
TeleChat模型相比同规模模型在评测效果方面也有较好的表现,我们的评测集涵盖了包括MMLU、C-Eval、GAOKAO、AGIEval、CMMLU、 GSM8K、MATH、HumanEval、CHID等数据集,评测能力包括了自然语言理解、知识、数学计算和推理、代码生成等
## 评测结果如下
| Model | MMLU | C-Eval | CMMLU | AGIEval | GAOKAO | GSM8K | MATH | HumanEval | CSL | CHID | EPRSTMT |
|:--------------------|:--------:|:--------:|:------:|:--------:|:------: |:-------:|:--------:|:----------:|:-----:|:----:|:-------:|
| | 5-shot | 5-shot | 5-shot | zero-shot | zero-shot|4-shot | 4-shot |zero-shot| zero-shot | zero-shot |zero-shot |
| LLaMA2-7B-chat | 46.2 | 31.9 | 31.5 | 28.5 | 16.1 | 26.3 | 3.9 | 12.2 | 58.8 | 44.1 | 57.5 |
| LLaMA2-13B-chat | 54.6 | 36.2 | 38.7 | 32.3 | 18.6 | 29.6 | 5.0 | 18.9 | 61.2 | 48 | 59.4 |
| ChatGLM2-6B-chat | 45.9 | 52.6 | 49.3 | 39 | 46.4 | 28.8 | 6.5 | 11 | 61.2 | 57.9 | 71.2 |
| ChatGLM3-6B-chat | 51.9 | 53.8 | 54 | 38.9 | 49.3 | 56.7 | 18.7 | 61 | 65.6 | 63.4 | 85 |
| InternLM-7B-chat | 52 | 54.1 | 52.6 | 43.7 | 45.8 | 34.6 | 5.6 | 12.8 | 70 | 79.7 | 88.8 |
| Baichuan2-7B-chat | 52.8 | 55.6 | 54 | 35.3 | 39.7 | 32.8 | 6 | 13.4 | 60 | 75.2 | 87.5 |
| Baichuan2-13B-chat | 57 | 56.7 | 58.4 | 40 | 51.4 | 55.3 | 8.6 | 17.7 | 63.1 | 78.2 | 87.5 |
| Qwen-7B-chat | 56.6 | 59.3 | 59.5 | 41.3 | 63.3 | 52.5 | 10.3 | 26.2 | 63.1 | 72.3 | 88.8 |
| Qwen-14B-chat | 66.4 | 71.7 | 70.0 | 47.3 | 76.5 | 61 | 26.8 | 36.6 | 55.6 | 72.3 | 91.2 |
| TeleChat-7B-chat | 54.4 | 62.1 | 64.3 | 46.8 | 57.7 | 36.7 | 10.3 | 14.6 | 66.81 | 88.0 | 87.5 |
说明:CMMLU、AGIEval、GAOKAO、CSL、CHID、EPRSTMT均基于[OpenCompass](https://github.com/open-compass/OpenCompass/)平台提供的评测方法进行评估,而对于对比模型,我们同时参考了官方汇报结果和OpenCompass结果。我们使用了自己的评测脚本评测MMLU与CEVAL榜单,具体方法见`evaluation/`文件夹。
# 模型推理
## 4bit量化模型推理
```python
>>> from transformers import AutoTokenizer, GenerationConfig
>>> from modeling_telechat_gptq import TelechatGPTQForCausalLM
>>> PATH = '../models/7B_4bit'
>>> tokenizer = AutoTokenizer.from_pretrained(PATH, trust_remote_code=True)
>>> model = TelechatGPTQForCausalLM.from_quantized(PATH, device="cuda:0", inject_fused_mlp=False, inject_fused_attention=False, trust_remote_code=True)
>>> generate_config = GenerationConfig.from_pretrained(PATH)
>>> model.eval()
>>> question = "生抽与老抽的区别?"
>>> answer, history = model.chat(tokenizer=tokenizer, question=question, history=[], generation_config=generate_config, stream=False)
>>> print("回答:", answer)
回答: 生抽和老抽是两种不同的酱油,它们的区别主要体现在以下几个方面:
1. 原料不同:生抽是用大豆、小麦等制成的,而老抽则是用豆豉、盐等制成的。
2. 发酵方式不同:生抽是通过将大豆或小麦浸泡在水中,然后进行发酵制成的;而老抽则是在制作过程中直接将大豆或小麦炒熟后使用。
3. 味道不同:生抽的口感比较鲜美,有咸味和甜味;老抽的味道相对较重,有咸味和苦味。
4. 用途不同:生抽主要用于调味酱料、腌制肉类等;老抽则主要用于烹调菜肴、焖煮食材等。
```
# 声明、协议、引用
### 声明
我们在此声明,不要使用TeleChat模型及其衍生模型进行任何危害国家社会安全或违法的活动。同时,我们也要求使用者不要将TeleChat模型用于没有安全审查和备案的互联网服务。我们希望所有使用者遵守上述原则,确保科技发展在合法合规的环境下进行。
我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用TeleChat开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。
### 协议
社区使用 TeleChat 模型需要遵循《[TeleChat模型社区许可协议](./TeleChat模型社区许可协议.pdf)》。TeleChat模型支持商业用途,如果您计划将 TeleChat 模型或其衍生品用于商业目的,您需要通过以下联系邮箱 [email protected],提交《TeleChat模型社区许可协议》要求的申请材料。审核通过后,将特此授予您一个非排他性、全球性、不可转让、不可再许可、可撤销的商用版权许可。
### 引用
如需引用我们的工作,请使用如下 reference:
```
@misc{wang2024telechat,
title={TeleChat Technical Report},
author={Zihan Wang and Xinzhang Liu and Shixuan Liu and Yitong Yao and Yuyao Huang and Zhongjiang He and Xuelong Li and Yongxiang Li and Zhonghao Che and Zhaoxi Zhang and Yan Wang and Xin Wang and Luwen Pu and Huihan Xu and Ruiyu Fang and Yu Zhao and Jie Zhang and Xiaomeng Huang and Zhilong Lu and Jiaxin Peng and Wenjun Zheng and Shiquan Wang and Bingkai Yang and Xuewei he and Zhuoru Jiang and Qiyi Xie and Yanhan Zhang and Zhongqiu Li and Lingling Shi and Weiwei Fu and Yin Zhang and Zilu Huang and Sishi Xiong and Yuxiang Zhang and Chao Wang and Shuangyong Song},
year={2024},
eprint={2401.03804},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
bartowski/speechless-zephyr-code-functionary-7b-exl2 | bartowski | 2024-01-24T01:26:26Z | 1 | 0 | transformers | [
"transformers",
"text-generation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-24T01:10:51Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
quantized_by: bartowski
---
## Exllama v2 Quantizations of speechless-zephyr-code-functionary-7b
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.12">turboderp's ExLlamaV2 v0.0.12</a> for quantization.
# The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)
Each branch contains a quantization at a different bits per weight, while the main branch holds only the measurement.json used for further conversions.
Original model: https://huggingface.co/uukuguy/speechless-zephyr-code-functionary-7b
| Branch | Bits | lm_head bits | Size | Description |
| ----- | ---- | ------- | ------ | ------------ |
| [8_0](https://huggingface.co/Bartowski/speechless-zephyr-code-functionary-7b-exl2/tree/8_0) | 8.0 | 8.0 | 9.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/speechless-zephyr-code-functionary-7b-exl2/tree/6_5) | 6.5 | 8.0 | 8.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/speechless-zephyr-code-functionary-7b-exl2/tree/5_0) | 5.0 | 6.0 | 7.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/Bartowski/speechless-zephyr-code-functionary-7b-exl2/tree/4_25) | 4.25 | 6.0 | 6.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/Bartowski/speechless-zephyr-code-functionary-7b-exl2/tree/3_5) | 3.5 | 6.0 | 6.1 GB | Lower quality, only use if you have to. |
All VRAM requirements estimated from 16k context. For 32k context add ~2 GB.
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/speechless-zephyr-code-functionary-7b-exl2 speechless-zephyr-code-functionary-7b-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you just want the measurement.json) to a folder called `speechless-zephyr-code-functionary-7b-exl2`:
```shell
mkdir speechless-zephyr-code-functionary-7b-exl2
huggingface-cli download bartowski/speechless-zephyr-code-functionary-7b-exl2 --local-dir speechless-zephyr-code-functionary-7b-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir speechless-zephyr-code-functionary-7b-exl2-6_5
huggingface-cli download bartowski/speechless-zephyr-code-functionary-7b-exl2 --revision 6_5 --local-dir speechless-zephyr-code-functionary-7b-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir speechless-zephyr-code-functionary-7b-exl2-6.5
huggingface-cli download bartowski/speechless-zephyr-code-functionary-7b-exl2 --revision 6_5 --local-dir speechless-zephyr-code-functionary-7b-exl2-6.5 --local-dir-use-symlinks False
``` |
ntc-ai/SDXL-LoRA-slider.back-to-the-future-film-still | ntc-ai | 2024-01-24T01:26:09Z | 23 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | 2024-01-24T01:26:06Z |
---
language:
- en
thumbnail: "images/evaluate/back to the future film still.../back to the future film still_17_3.0.png"
widget:
- text: back to the future film still
output:
url: images/back to the future film still_17_3.0.png
- text: back to the future film still
output:
url: images/back to the future film still_19_3.0.png
- text: back to the future film still
output:
url: images/back to the future film still_20_3.0.png
- text: back to the future film still
output:
url: images/back to the future film still_21_3.0.png
- text: back to the future film still
output:
url: images/back to the future film still_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "back to the future film still"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - back to the future film still (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/back to the future film still_17_-3.0.png" width=256 height=256 /> | <img src="images/back to the future film still_17_0.0.png" width=256 height=256 /> | <img src="images/back to the future film still_17_3.0.png" width=256 height=256 /> |
| <img src="images/back to the future film still_19_-3.0.png" width=256 height=256 /> | <img src="images/back to the future film still_19_0.0.png" width=256 height=256 /> | <img src="images/back to the future film still_19_3.0.png" width=256 height=256 /> |
| <img src="images/back to the future film still_20_-3.0.png" width=256 height=256 /> | <img src="images/back to the future film still_20_0.0.png" width=256 height=256 /> | <img src="images/back to the future film still_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
back to the future film still
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.back-to-the-future-film-still', weight_name='back to the future film still.safetensors', adapter_name="back to the future film still")
# Activate the LoRA
pipe.set_adapters(["back to the future film still"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, back to the future film still"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1140+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
Bellksly/Oddlands | Bellksly | 2024-01-24T01:24:12Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-01-24T01:22:01Z | ---
license: mit
---
In our world we exist, but parallel to us is Oddlands, an empty, untouched reality.
It was formed many years ago by the space giants (the only living things at the time), created with the help of Max (more on him later). In the beginning there were 6 space giants with the ability to control reality: Arcav, Zapp, Squat, Lonin, Forquin, and Alit. They used their combined power to create the universe, but Alit tried to sabotage them; for this he was forced billions of lightyears into the white space outside the universe. When he returned he used his powers to try to destroy the world, but he was quickly beaten by Zapp and Zache.
Other than that, let's start at the beginning.
When Oddlands was formed, a meteor struck it.
This meteor formed the first life on Oddlands, the Alindys: like a human but shorter and faster, with a tall head and a longer chin. They eventually became humans, and Zapp traveled around the world killing humans until he was eventually absorbed.
Over time kilot (oddlands) was separated into two groups mortals and monsters they fought for land and so the first super soldiers were made they experimented on 147 people 14 of which were pregnant the experiment took place at the crater incase something went wrong it would be easy to kill them things went well at first they became powerful killing monsters at an insane rate but soon they became addicted to the smell of fresh blood and when there were not enough monsters to go around they turned to the next best victim killing humans even faster the were locked away by a magical force and slowly became unrecognizable killing all life in the small section of land the remaining monsters soon repopulated and they signed a truce with mortals to live in peace but prejudice still existed and monster became a slur so this wasn't the end
On a nearby island the humans fought off skeletons in the skeleton war until skelly met emit and they became friends and ended the war after this oldWorldTown also known as old city
immediately began starting wars with mainland the first time they won the second there troops were destroyed and a truce was signed off
A couple hundred years later jack is born and the nothingness takes over after his plotline the world is struck by alit who demolished 95% of the life on oddlands including plants when he's finally defeated he's trapped in a disc hopeful never to be seen again but we all know that's not what will happen
After the fallout jacks sword zoro the nothingness and noah are found and the sword is broken releasing a version of jack and the nothingness/noah noah plans to finally end the world but is beat by elyt blank finally ending jacks life and the nothingness elyt uses the sword of cores to win the sword of cores was located dead center of the crater and it was a sacred item of power
It was created by arcav and contains the abilities of all the og giants with the sword now freed from its prison alit was begging to find a way out
This is when trenten a 14 year old boy was working on his school paper he traveled to xit the prison in the void when they met alit use all the power he had remaining in his body to attempt killing the boy instead he was absorbed as he had hoped alit use the boy to steal the sword and take over the planet he told him he was the good guy and they would rule the universe together but at a certain point it became to much and alit was destroyed by none other than frosty himself
In a blizzard of hate and snow and the world reverted to normal
here are some more events
During the age of the nothingness around 50 years before more accurately an underground laboratory known as the pother fired of a series of vibrations ripping holes in the universe this is eventually what caused plasma to become a famous hero out of theses gaps 4 creatures formed the ragno which is literally a giant sentient volcano dinosaur 1 a gas like creature that knocked people around it out and drained there souls shadow eater a purple eyed humanoid who grew each time it consumed a life and the battle bots tiny robots that could fuse together and form demonic creatures
Plasma made a deal with the shadow knight skid to become a powerful shadow bender he use his abilities to kill the creatures perishing fatal however at the hands of shadow eater
meanwhile in motrom a girl named lexa was born
Years ago jordan fritz bought motrom he didn't know about the town's history of criminals
But when he figured out he use his powers to create 10 gems
The fire stone
The earth stone
The water stone
The light stone
The portal stone
And the five stronger ones
The night stone
The death stone
The mystic stone
The building stone
And the life stone
The stones were to given to people who became superheroes
https://docs.google.com/spreadsheets/d/11PLD4NGZhkbmeSgUz2IZqOgxr0csoKxAntTvlNasuDw/edit#gid=0
Years later they all retired and there gems merged into the infinite stone
Which a young boy named maxs bought for his rock collection
Using it he created more stones for the old heroes who were brought back to youth
Max grew up and he got stronger with his powers but after
His parents got in a fight his mother drove off and got in a wreck his father disappeared after that his sister was adopted and he slowly went crazy
All of that drove him insane after all was lost he created giant bubble were every inside was mind controlled to act neighborly and friendly he started to think it was the real world he loved it much until lexa came she was born within his magic mind controlled city but from a young age she was quite defiant one day disastour struck and she was hit with bolt of frozen elctricty giveing her some unique abilities she could slow down time rip apart reality to her likeing create ice and shoot electricty after learning to use her powers she began to udnderstand how bad life was inside a bubble where everyone was fake so one day she attacked max wakeing him from his false reality she began tearing apart the city whilst fighting him he was much much stronger due to haveing 10 stones of ability but she was tricky and in the end she abandoned the bubbleed town he followed her out not realizieng that it immidiatly destroyed the town and worse the stones lose there ability outside of the general area of joran fritzs but hes dead so its his corpse max immideiatly fell to the ground and lexa returned freeing the village people and the heros stones were retrived and burried with jordan other than maxes infinite stone because she couldent find him the assumption was he died when he fell but he didnt it it just so happened to be new years eve when he hit the ground he was thrust into W0 where he immidieatly got back up fulll of ange rhe became the 2nd person to ever escape from W0 before midnight he litterally tore a hole in reality by screaming a set of vibrations when he returned he wasnt in his time the only way to return to your time is the gate that closes at midnight the next day instead he found himself at the begining of time with nothing but a stone he needed help to so useing the stone he created some friends
arcav zapp squat lonin forquin and alit the original space giants but actually he made 7
arcav got the fire stone zapp the light stone squat the water stone forquin the life stone alit the death stone lonin received the a custom stone of relationships
and a unknown giant known as bazil the galaxi queen she took the rest of the stones and made the other 987 space giants that came after the world was made she also made the other 200 mini giants that roamed the earth before handing the stones to alit and becoming dormant alit use the stones with the help of the other giants to create 3 objects the core a object that can grant wishes the sword which they put deep within a meteor and fired at woodlands and finally merged it back into one stone again they gave it to max who was instantly attacked as he attempted to bring the world into existence using everyones powers by alit alit took the stone but they grabbed it and use it first creating the rest of the universe and shooting alit deep into the white space with the stone this killed max once and for all alit kept the stone until he returned to the universe using trenten when he was finally killed the stone was returned to past jordan fritz who absorbed its powers to continue the timelines loop
more characters? okay
as you know when the crater hit the experimented there the pregnant woman's children were born and raised in society in a town called Normville where powers and magic and oddness was banned as a revolt against the wars it caused the people in this town were quite frankly emotionless and bland performing the same routine everyday till they died except the 14 children who soon developed emotions and abilities they became known as the 12 after one died and one refused to join them they lived for eternity protecting righteous
eventually it was narrowed down to 10 during a war against the floridian empire
the floridian empire is a monarchy from a void realm that ripped into oddlands and obliterated everything they were silver humanoids that could morph and rode boats in space \when they crashed into oddlands they depleted the population by 45% of the population leaving 10 rings
the ring of elements was a silver ring with a red blue green and white dot it allowed its user to bend elements
the ring of mentality was a purple gem with a spider leg grip it aloud users to read minds and trap people in their dreams
the ring of knowledge was a silver ring with a green gem in the center it allowed the user to pass on info by touching objects
the ring of life was golden decorated with a marble flower in the center it allowed the user to bring things to life
the ring of light was a simple metal ring with a tiny light bulb in the center it allowed its user to bring light to it and send light elsewhere
the ring of death was a black ring made of two snakes crossing path there heads meet as a purple gem it allowed its user to kill things on contact
the ring of time is a tiny watch it allows the user the ability to speed up slow down or freeze time as well as time travel
the ring of reality is a yellow star and could allow you to slow reality summon things telekinesis and other things
the ring of perception was a hypno circle attached to a black ring it allows you to summon illusions
and the tenth and most powerful ring was the ring of wishing it allows you to grant wishes for yourself or others
theses rings were discovered they were immediately stolen and spread around the world causing the war to break out between mainland and old city
but before that war broke out someone by the name of 6-7 stole them and used the time traveling properties of them to go to the future learn everything about the war and return to capitalize off of it quickly becoming the most powerful politician 6-7 use his future intel to monopolize off tiny eventa and exploit war soon he became power hungry and decided to gain more power he needed to steal another ring so he slaughtered the owners of every ring till he had 7 out of the ten the other three were the ring of elements the ring of death and the ring of light they were gathered by rowen a drifter mazo a winged human and raylee a cyborg girl with a robot arm raylee took the element ring and use it on her cyborg arm to absorb elements and fire them mazo took the ring of light allowing him to blind predators
and obviously rowen took the ring of death however this specific ring was diffrent it wouldent let it be removed from its user leaveing rowen depressed for the instant he neared anythiing it perished he moved to Mount hayville near the middle of oddlands reaching insane heights the top is baically space you lose most gravity and have little oxygen he hid near the center to avoid killing anyone and when he arrived at the area he wanted to move to he discovered a cave after entering hw found alits core and when he got near the ring took over his mind unleashing a hellish wish until the world a virus of rot began to spread crawl ing through the soil eating all life moveing at a steady 32 miles and hour in each direction consumeing every bit of life and everything in its pathat this time 6-7 was unaware as in his trips to the future he had never seen the virus so he had no knoledge of it and began searching for the death ring useing a magic gps he traced it to the mountain after climbing a few hundred miles he noticed a dark oiily layer climbing down the mountain at an alarming rate this was the so called virus the death virus caused rowens skins to peel and his flesh to mutilate till he was nothing but a skellition covered in chunks of rotten flesh and skin he was also trapped within the cave and began going insane the virus slid down to 6-7 instantly dissovleing his hand he use his other hand to slit the rope and fall to the pass below he ran down the mountain the oily substance constantly behind until he had an idea he reched for the reality ring but slipped dropping it into the icy pass so on instinct he leaped off the mountain after it surviveing only by grabbing the ring seconds before hitting the ground he teleported to the surface and began contacting authorites meanwhile rowens new insane variant was ripping through the stone walls when he escaped the cave he discovered that a ring had been dropped the ring of life rowen picked it up and immideatly began regenerateing he was no longer human all he was was a shell a husk whose only goal was mass death and suffering the crature slid the ring onto his other hand which was covered safely by a glove and began its decent down the mountain through its goo of detath when it arrived at the bottome authorites were waiting guns pointed high rowen simply grimaced and lifted his hand up and each one was immideatly killed as they hit the oily floor the were covered in a dark substance loseing there humanity and becomeing souless entitys the virus was proclaimed the eternal death aka TED and so rowens new form was named TEDDY ted spread across oddlands killings thousands eventually 6-7 decided to step in as most of mainland had been killed he use the remaing rings to summon a powerfull blast but it didnt work teddy nearly killed him but he escaped in time and decided to collect the other two rings and return he found mazo and tried to reason but he refused so 6-7 attempted to kill him he was blinded and had his rings stolen mazo took them and wore them at once luckily for 6-7 he still had the time ring though mazo decided if he could collect them he could prtoect the world from there harmful properties so he met up with raylee who easily agreed as long as she could helpwhen they made it to ted he had already killed thousands mazo tried to reason but immideatly was attacked raylee on the other hand turned to mazo and shot him with a ball of helium trapping him inside a floating bubble she leaped up grabbing the bag of rings and equipping all of them 
in her bionic arm she flung her hadn in the air and the oily liquid ebegan floating up as she ripped the ring of life off his gloved hand and into her bionic arm the ring of death hover was gone it had became rowen and rowen it the beast howled and shot oil towards her which she quicklly dodged ending the fight when she leaped into the darkness fireing a blast of life into it the death consumed her as it reformed into rowen again leaveing only the ring of light in the sky with mazo the ring of life on the ground next to the ring of death the ring of time with 6-7 and the one ring that survived the death the ring of memory these rings were collected by rowen when he returned to the surface other than the time one obviously which was split into 3 versions one allowed future travel one allowed past travel and one allowed frreezing of time the remaing one made it actually 4 versions the remaing one effected the speed of time these were given to rewind pause and fastforward his nephews for a reason that only he knew as he seen it in the future and he kept the remaining one for himself which he use to train for and impossible amount of time before returning to reclaim the remaining rings he found mazo and instantly killed him stealing the rings and equipping the final ones he took every ring up the mountain fused them together in a blast that killed him leaveing one remaing ring known as the ring of life and death which actually granted you the abilitys of memory time life death and light 76 years later rewind and fastforward who can now live forever due to there rings opened a tear in time releaseing a monster from a time after time this will be one of the only reemaing forms of life after reality ends its known as the raptor its awgiant black worm with sharp teeth many other beings entered from the tear and crowed the planet with violence pause tried to use his ability but things from other times wouldent freeze so instead he was torn apart dieing instantly and unleasehing a powerfull blast which snapped the tear shut rewind tried thousands of times to save him by rewinding but couldent and fastforward went to the future but found himself lost in other timelines until eventually he ended up way back in the day he was born he had gained the knowledge of everything ever and told his uncle about the rings and how he needed to continue the loop then fastforward met up with rewind and they opened a gate to a time when pause existed and kidnapepd him then all together useing fastforwards infinite knowledge the performed and ultimate reset sending everything back to its time and reseting the universe this time they werent born so the rift never opened there identys were forced out of the universe and sent into a seperate one oddlands is a planet in a galxy known as pilot pilots a galaxy in a universe known as Maxot named after max obviously the universe however is part of a infinite set of variants known as WEB1(wow you found the kode dm me truck for some quality reward's)maxot and everything else inside WEB1 is sentient so it can force people into other parts of it or other webs other webs allthough there are only 6 are usually not the best place to be for instance WEB 0 is an endless void whose universes contained arent sentient No web is similar to another performing and ultimate reset doesnt effect other webs but it does effect everything in the web it accurs in speaking of time theres also a name for that each second in time is known as a biollop and exist soley within your web timetraveling in 
between webs can happen but it will only send you where you were at the time not the universe around you its almost impossible to leave your web unless your kicked out by the universe but thats rare so where did rewind pause and fastforward end up and wheres the white space if its not inbetween webs what is inbetween webs well to awnser question one they were forced into one body known as TimeWind the awnser to the second question the whitespace is inbetween sections of the web so you can end up in other versions of reality from there inbetween webs theres creatures known a livintos which are like space giants exept only 5 there called universe giants there names are 1 2 3 4 and 5 1 can create life 2 can create death 3 can create time 4 can create reality and 5 can create power this whole dramatic mess ended a couple years after rewind pause and fastforward should have been born most history played out the same until the rings were found rowen became ted and was killed by raylee but instead 6-7 kept the ring of time and killed mazo to get the ring of life and death fuseing it with his time ring he created the ring reality with the same abilitys as the ring of life and death plus the ring of time he hid the ring in a chest in the mountain where the wish was made and jumped off a tower useing some ability to send himself to a void where time didnt exist and he was alone he wasnt seen again over time oddlands was taken over by a strict goverment were magic ceased to exist until libbie rose was born she went on a casual hike but ended up high in the mountain where the wish orb was she discovered the ring of reality and experinced all of history from the memory area she then use her ability to take down the goverement before finnally smashing the ring in doing so she was sent to a pocket diminsion of eternal hell this also caused a chain of events that went like this the goverment crashed the leaders son ran away the leaders son fell into a wormhole the leaders son woke up in the white space with no memory he decided to go by 6 after meeting 5 who was visiting the web he lived in 6 became an honorary universe giant and eventually gained the ability to create perception this new universe giant kind of just kinda broke physics smashing the alternate realitys from each web into each other and eventually webs started smashing into each other createing a huge mess of realitys this time was known as the down fall and wasnt really a great time for anything the universe giants ceased to exist and the space giants barely kept the realitys together eventually when all was said and doen the univeses and diminsions and webs existed as one unescapeable reality luckily there wasnt much overlap the new universe was named Kirty and was sentient it bassicaly ate itself till every variant copy of a liveing thing was gone and each web had been shrunken into a planet this led to alien travel causing oddlands to gain visitors and increase diversity but other than that not much changed the space giants became the most powerful creatures again and life continued during this the however timewind was reborn and 6-7 re-appeared in oddlands they both met and became leaders splitting the planet in two but soon they died and new leaders were elected eventually things become normal again
nothing however lasts forever eventually oddlands came to an end when the creator fired off a blow of electrical patterns which caused the magic borders to break unleashing the other pups and others into the world not only this but humans and mushrooms alike began mutating into strange creatures until oddlands was known as PitchCork a kingdom ran on strangeness and anarchy it actually stayed like this for 1000s of years before eventually everyone had died off or left it
much more happened during the timeline for instance way back when jack was born he actually had a sister,jill
eventualy she changed her name to Jay to avoid being made fun of for "jack and jill" when there parents died jack moved to krogam with there grandmother but she was an adult so she instead headed west past the tower and into the land of bob the land of bob has some unique properties its a forest only a couple miles in each direction but its sentient and is a weak point in reality causeing pysics to break in some areas its also known for being a pockekt dimnision at any moment you can be sent to and infinite forest diminsion filled with monsters this is actually where forestlings and mushroom folk come from any way as she delved deeper until she found herself in the pocket diminsion she stumbled through the endlless forest not knowing the trouble shed gotten herself into the deeper she went the more sentient the forest became and it began to grow fond of her for every where she went she spread joy at least in the forests eyes from her perspective she was trying to survive "feeding" birds was just her failing to lure them in (read jay and Jacks life's for the finished life story ) dureing the age of anarchy the heros rya hart (hartbreaker) zexkon (zexi) livita (liv/angel) M0(Monochromatic)alistor*quantum)
they were all sort of anti heros mono had D.I.D alistor was skitzo livita was addicted to angel juice (liquid crack cocaine)zexkon had no family and rya was insane they didn't fight for justice at all other than mono who kidnapped children from bad lives and killed them because he's nutzo
eventually these heros were all killed by the 14th ruler which il now explain the fallout was caused by a timer max set on the universe he told his fellow giants to destroy the universe when it ended however now nobody wanted to so they started fighting on whether or not to destroy it the giants became weaker {space and universe} until they were forced to retreat into human bloodlines afterwards there wasn't any magic and the universe started falling apart physics broke and laws were meaningless this would have caused if it weren't for the fact people are useless without a ruler it eventually became anarchy each of the 16 rulers had gotten abilities from a giant meaning they had magic here's some info about each one
1-eliz-she was 16 during the fallout and began using the universe giant of destruction to obliterate everything she was overtaken by
2-zinky bints-he was unborn when the fallout happened but was born with the universe giant of creation he ran a successful government but was killed by
3-monti-she was 22 when the fallout started she had the universe giant of life allowing her to rule for 4000 year until
4-alexi hems was born 4200 years after the fallout with the space giant of death he started a monarchy
he's the only known person to ever tame the death magic his monarchy rained another 10 lifetimes
5-he was born with the abilities of the giant of perception he ruled momentarily then died
6-she had the universe giant of time and use this to rule for another 5000 years but upon death was erased from reality
6-2-he was also born with the time giant and use his ability to rule for exactly 8 days then died
7-zapp she was born with zapp and use his abilities to shockwave threaten everyone she stepped in a puddle and died zapp wasn't able to spread through her bloodline as she had no kids but
7-2- found her corpse and cut her arm absorbing zache he was a evil person and ruled till he died of old age
8-hazels great great great great....etc granddaughter inherited the abilities and went nuts she ruled 20 years before leaping into a tear in reality her name was liyla
9-we don't speak of 9 they had the abilities of alit
10-there were no record of him it's mostly agreed he had the abilities of arcav
11-no records of her either she appears however to have been powerful legend tells she had the powers of bazil
12-turns out jordan fritz is considered a giant after death he became one and was absorbed by emilia rose they ruled 80 years fairly
13-same with lexa apparently anyone who came in contact with the gem became a giant after death (not max though niko harts wrote his entire life in a diary including the fact he contained lexa their storys odd though he claims they fell in love and fused into one being ruling for 40 years however other claims insist he was seduced then had his body taken over as lexa went on a spree of horrible things
14-the only non magic ruler. his name was grits and he gained power by selling angel juice and leading a drug cartel. the odd part? he was 11 yet alive during the fallout. what happened was he was 10 during the fallout when his family was killed and he became violently angry. he wandered the world for a year before coming across an underground fountain that created cherub juice when mixed with halo dust and angel juice. after one sip though he discovered his mixture was too strong and was literally sent a couple thousand years into the future. he searched the world another year, found the fountain and began mass producing it, eventually ruling the world. eventually he died and was taken over by
15-lonin and evergreen ruled by making people fall in love with her she died when a blind asexual fought her
16-the final ruler also had a strange story you see his giant was actually jackpirates grandmother turns out she received the gem from a friends friend named you guessed it jordan fritz meaning she had touched it when she put it in her shop this also elect the possibility of jack or her friend touching it i get into that later though the final ruler and jacks grandma beth were really bad rulers not bad as in evil bad is in how the hell did you get so powerful they actually never killed anyone and drank a lot of tea they both died of old age
Then what? What about jack? What about the friend?
well let's get into it jack did touch the stone at one point but only the jackpirate from became a space giant guess which? i'll give you a hint there a pirate i've already mentioned nothing? well it was hazels
the friend was named roi and he well he became the reaper and gained the ability to go to the past convince jordan to give him the stone thus looping everything then the reaper went to the very beginning of time with a custom stone of life after death that max actually use to create the new reaper heaven corp and hell corp as well as limbo corp wait death in oddlands let's talk about that
What happens when I die? What happens when they die?
well to start when i die another version of me will run oddlands for 1000 years then nivag will take over for 10 years until its safe for coco to move back into the imaginary friend agency
after that the human bodies will perish and the anarchy period will begin
after that the universe won't be destroyed but life will cease to exist until 1000 years after the full end of kilot the giants will be resurrected and they will fuse together to become a final me il revamp the universe this won't be me but it will be the oddlands me he will imagine coco back into existence and reform oddlands in a new way i'm not sure what will happen next adventures? magic? joy? death? pain? but it will be incredible what happens after someone dies though let's pretend your a oddlandian
you get struck by lightning then what? well you meet the second grim or as he's called mr.grim who slices you and then you awake in an elevator where your deeds are calculated if your good your allowed one of three options volunteer for a job which will allow you more fulfillment
cease to exist
or go to "heaven" where you live in a good place style world that isn't actually secretly hell
if your bad your given two options
work and reduce your bad deeds thus lessing your time in hell (its like prison depending on what you did you may eventually be allowed to leave but it will take a long time either way and some are never allowed out) or just go to hell corp which is more like helluva boss hell not torture just chaos and violence if you die after death you cease to exist or in rare occasions become a higher employee which means you can no longerdie okay? what if your deeds are equal well first off your traumatized by your elevator amashing around before crashing in limbo where you get to live eternally limbo is a library infinite in each direction containing every possible life decision etc there's no bottom either though so you have to jump off platforms tied to the side of the book shelves the librarians insane and mr.grims ex wife so she will erase you from existence if she sees you that is unless you were a misscariage or baby there treated differently your deeds are not counted till two and upp so if you die you go to limbo she will raise you as 2 or youngers actually grow after death until there about 16-25 years old at which point they just stop then you can choose to live a life in the human world with no memory go to hell.corp or go to heaven corp nix and liz are misscarage given this opportunity they don't know and decide to hide from the limbo government they accidentally wipe out a group of historical figures by destroying a couple thousand bookshelves as punishment there sent to earth its done quickly though so the gov forgets to make them babsy and wipe there memories they plan world domination after there plotline with norman the move back into limbo what happens if you stay on your elevator when it lands in heaven or hell well it just continues downwards through a tunnel then explodes falling into an area on top of limbo {up isn't infinite}known as elevator hell it's just a pile of every elevator ever used plenty come in every second eventually you will either be crushed or the gov will find you
okay so do oddlandians know this?
some believe it some are atheist there are only 4 belief systems one the correct one two space giants are from another world where you go after death 3 nothing happens or four your reborn
okay so do they have like a massiah? Are there angels ? Is there a god and a satan ? What do these things look like? Do they go to oddlands? let's answer all that real quick 1.not really they have prince whose an angel that came and killed demons when they began infesting oddlands
2.yes they are biblically accurate angels and also non biblical its a mix
3.kind of the satan is whoever in hell is strongest but its not an official title
3.b.not really the same as ours, he's a businessman with a ball of light for a head, he enjoys goldfish
4.already answered in the other questions
5.yes demons cause bad vibes but binnybinblacking, or binny for short, fights them. binny was given powers by prince so now he lives forever fighting demons. he has an eye on his forehead, his hand, his chest, and an eye above each of his fingers that float around; each eye allows him another ability when equipped through the eye on his chest. binny, although it's not important, is jewish as a throwback to jesus and as a way to make oddlands diverse
each renamed the one on the back is pots
the one his right hand is slayer the one on the left is capture the one on his chest is absorb the one on his forehead is sight the ones on his fingers are as follow
index1-prickly
middle1-pain
ring1-levitate
index2-illusion
middle2-form
ring2-jump
guess what they do lol
let's go back a little bit
first some facts
canonically the solar system exists in kilot
there are aliens on the planets
all but a few
saturn is sentient
i also started playing fnaf in
2021 and made a ripoff known as clayman
the story took place of a video game series Clayman 1
clayman owns a candy shop were there are magical clay statues out for blood your the nightguard
Bots
Bixbie vents
Jerome doors
Kido doors and vents
Maximas right door
Clayman 2
The rumors destroyed his old store but he rebuilt it your the night guard
Bots
Joey doors
Karma vents
Laxs cam that he appears on off or he will attack
Mixer hiede in vent
Dj when his music play shut of power
Clayman 3 scrapped nightmares
In agony he scrapped it all in a warehouse where his clueless son works as night guard
You are clayman's son
Glitch
Clayman he wonders he peaceful
Dax he will enter your room if you don't reset generator once his brothers arrive you die
Pax he will enter your room if you don't close your doors when he's near by once his brothers arrive you die
Max he will enter your room if you don't hide in vent when he appears on cams once his brothers arrive you die
Dexter vents
Rex doors
Clayman the factory short film
Clayman discovers the government plans to end the world with his powers and they have kidnapped his grandson dave he saves dave and enters clayman's nightmares dave is helped to escape by jeff aka glitch and his dad fred falls into the clay molder glitch uses a force field but in the process gets stuck to dave is running from a villain and also falls into the force field the clay is killing the force field its been up for 2 days clayman wakes up and saves them but dies in the process he jumps in and absorbs it all this kills him after he dies his experiment the dark clay opens
Claymans nightmare
He gets buzz sawed in half after realizing he's horrible
The five people that originally died
Give him nightmares of what he did to them
The dark clay chronicles
Twister he cant enter through door because he has no arms its not locked though so close the vents
And the blobs close vent door or window there at
Scrap yard the finale
Years after everything the bots are sent to the woods and the clay merges with trash you get lost in the wood you must head out for wood to light your fires
Jake left light on
Eddy right light on
Shadow vent closed
Beast door closed
Dodo and scarp no noise when salvaging
Jimmy hide when he enters
Timmothy find toaster
Kimmy turn off lights
Jerry head into the woods
Nightmare when you hear him in the woods use flashlight quickly
Dream when you fall asleep she will visit just wake up
Also if you don't know when you dream you are attacked by the clay but pixelated so your goal is to die before they kill you so you wake up also during the day you go to the junkyard to get upgrades
the story is as follows
clayman makes clay statues using human souls to keep his candy shop popular
his children discover this
one gets thrown into a pot a mixes with another soul
the other gets amnesia then adopted then works for every company his biological father opens
he has a son the government uses the clay to plan a takeover
its take your kid to work day
the magic pot is absorbed by clay
he died
the dark clay became the nothingness
long ago a curse was set: every 1000 years a hero is born with a special ability, seeing another world known as kato, a world of mystical things. people there are faceless; they roam amongst us using their abilities to interfere, some powerful, others less, some kind, others violent and vengeful and mindless. these heros are known as prophets. their job is to use their abilities to keep the worlds from merging till they die in 1000 years. during the time they're alive they age slower so they seem aged accurately
Gabe
a elderly man who offers wisdom to those who ask he runs an antique shop that ash frequents but he's hiding a secret he's aware of katos secrets and knew the past prophet he won't give his knowledge easily
Forest
a 16 year old childish wise slightly dark sense of humor and morbid tendencies she's quick and agile hard to befriend but once complete she's kind
Zula
a creature from kato she wears a faded pink dress with 1400s detailings and black heels she's wears faint white leggings and has a slight glow to her she also floats above the ground and doesn't speak but finds other ways to communicate
Ash
A 14 year old prophet living in the 20th is informed of his role by an elderly man in a dream soon after he begins noticing the creatures of kato and begins his mission1
Jacob
ashes brother who perished in a fire years before he was born watches ash grow up and is treated like an imaginary friend that ash talks to like he never died
during the fallout magic doesn't exist and a tsunami takes people to monster island
alerting super shadow of a world outside the island he goes out and fights crime caused by the lack of laws
another story takes place in a city near the coast of old town island known as clockwork kingdom
a boy named
james is a young orphan
he has been bullied by many but only recently he has begun to see a side to himself
he wants justice
but does it mean he needs to murder his bullies
well he thinks so he becomes a d.i.d patient
with like 15 personalities
he kills a bunch of folks
there's also a demon known as Tag which once summoned will chase you until spread on then kill you
recently learned noah owns a bag containing the nothingness. as an explanation i've redone part of the lore: the nothingness was defeated and obliterated by jack as usual and jack became JACK PIRATE AND WENT TO ZORO, but noah did not exist within the sword as usual; he was forced into the human world with the nothingness bag and time watch
as another fix noah is an orphan and a genius who founded the time bomb factory which is un important but relevant nonetheless noah's stopwatch can protect him from death as well there's a demon chasing him (which is actually maxium in a form of existence on earth) to eliminate him the watch simply scares it off as it could eliminate time itself
Worldbuilding time
Everyone who can speak can speak English, but most beings have a native language (mushroom? Moshling). we also have a coding language called kode, taught at a huge computer shaped simulation building. it uses 9 numbers and math symbols
1-if
2-then
3-positon
4-color?
5-set
6-customizable
7-create
8-destroy
9-edit
0-a/z reference a sprite
So for instance
1.(1=1)2.5.4-a.blue.
This means if 1=1 then set the color of sprite A to blue
Code is only use within the dome and can be use to alter reality within it you can also upload programs to the “internet”
Alrighty onto some lore revolving around the highly unstable dimension
It was recently discovered it's beyond huge
Dimensions like the oddlandian limbo are greater than anything ever conceived
So think that plus the other two afterlives, Zoro, reality, Bob etc, every universe fused, times 88. This universe is highly powerful and constantly forms realities within itself. these realities are home to every story I've ever made that's not directly a part of oddlandian lore
Aswell I'm adding a new timeframe to oddlands
At some point
Before the fallout
Humans nearly went extinct
Because of eternal life
You see everything became perfect during the time of normviles rein
This caused a creation that allowed eternal life with the swallow of a pill
People who took this pill became tortured creatures slowly mutilating their own bodies
Having kids would result in them being unable to die
Eventually a rocket filled with mutilated beings was set up by humans to launch into space at the crater
It combusted; the corpses of the other beings and the mutilated unkillables fused, creating fleshland, which is basically a crater sized pile of flesh that consumes everything
Robots were made to burn the unkillables and dispose of there ashes but the robots became ai killing everything in sight
After pushing most unkillables into the other side
(Il talk about realities that aren't spoken of yet in a moment)
The humans also locked away the robots in the unstable reality causing a loop in which they seperated there universe from the unstable and became the one I forgot the name of they then breached war ripping through the reality of the past
Anyway as promised realitys I've yet to mention
The shadow realm
I briefly mentioned when talking about plasma I believe but you can enchant your shadow burning it into a portal
The shadow realm is a void of shadows; failing to escape before sundown allows your shadow to take your place in the human world
Other side
Also known as kato
I really need to make maps
Anyway let's focus on some white space related content no universe can last forever without a system to keep it running
For instance arcav bazil and the other giants are usually considered the rulers of maximum which is why there ceasing to exist causes the outrage of reality
During the anarchy period
So they as a collective are the rulers
The white space is ruled by 1 for the most part
The ruler of Zoro is complicated
You see he was warped and became a escaper finding his way into oddlands where his inner skitzo went free roaming and he ran off into Bob
He made a deal with the forest his soul for an end to his misery he became apart of the forest and his essence became Zoro his skitzo was caused by his family tragic death which is why Zoro doesn't feel time
And doesn't need a ruler
The unstable reality was formed by the reaper before there was a functioning oddlandian afterlife everyone was kind of just dumped into one of the many realms
He had built a nice home where he survived the chaos in modern day elevator land so as a side project he began experimenting with traversing realitys (in search for his wife who praised him even though he was like totally a narcissistic rat)aswell he began practicing life and death magic these things combined resulting in him creating the unstable reality and releasing people's desires into it causing it to become what it is today
Il get more into the early reality soon
Bob is ruled strictly by Bob
It was formed when war corpses were buried underground and there souls and memories all fused together altering the forests existence into a greedy entity that created a pocket dimension that grew
War vets have always had reality crossing abilities since the crater and it's experiments had somehow affected the entire facility's and there staff permanently
So their corpses eventually moved from Bob into the dark realm. (I forgot to talk about it but it's the world Skelly's from ) they were the first to inhabit the world
Next up is the reality of time
This formed when kitzo (il talk about him when I talk about the early reality later)teamed up with the universe giant appointed with time to create it they created father time to rule it who again i'll talk about later
Finally (I think)
The Floridian realm
It was formed similar to how maximum was formed with its own lore
But basically a robot dictator looped reality to create his universe again
I'm struggling with creating a backstory to reality that does not impose on other religions so i'll just say kitzo was the living version of the white space and he as well as one worked together to build time and logic
Let's talk about organizations
There are 5 important organizations
1-IMAGIC
2-ISPX
3-SOCR
4-DTU
5-Numaries
The imagic and ispx are both groups that exist in the sky and are invisible in the human world
There are different versions of ispx each for a different color of the rainbow
So if its ispx yellow it's said
ISPY
These are all acronyms
imagic=international magic agency guys in clubs
ispy=international spy project yellow
SOCR is focused on fusing realitys
DTU is a group focused on destroying the universe
Numariaes are focused on killing everyone and harvesting the souls
Okay next topic i'm going to get into some more other reality things after but first we have what i'm guessing will be a large section titled the hereditary
So here's the gist of it there are 102 entities known as the hereditary when things are formed the good and bad separate when oddlands formed there was more good than bad however this is where things take a turn and become kind of complex you see
There are 3 versions of oddlands
4 if you include the weird time ring plot where everything reset
Well 4 if you count jacks reset
Okay there are 3 coexisting versions of oddlands
1-normal
2-reverse
3-chaos
Reverse is a timeline in which alit managed to stop the universe from forming and things delved into chaos
And chaos is a variant in which things constantly conflict; they do and don't exist in multiple places at once. so here's the deal: in the reverse world the entities created were basically reverse space giants and they were as listed
There called the hereditaries because aftering forming kilto they immediately force absorbed themselves into human bloodlines
1-Alit-yep believe it or not alit is a hereditary-he wasn't created by max but managed to brainwash them into thinking he was
2-zek-an entity capable of immense electric ability-
3-plop-a watery bubble
4-lone-a teleporting black cube
5-buzz-a pile of electricity constantly reforming, looking at it will kill you
6-mouth-a mouth
7-kip-an all black humanoid capable of opening portals
8-torchered souls-a set of flaming souls looking at them will cause you to turn to dust
9-20-eye-it watches you, it's capable of shooting a laser that's so powerful it can go through all of the earth and continue into space before dissolving
21-big worm-a big worm made of planet sized segments when curled becomes roughly the size of the milky way galaxy
21-32-a bunch of circles that alter your vision when looked at they constantly shoot spasms of darkness if theses spasms touch anything they pull it into the void and the entity may shoot it back out when it is in danger,sick,or to full
33-glasso-he's an ever changing pile of molten glass
34-pile-a pile of darkness with unknown properties
35-hat-a mouse with razor sharp teeth and immense speed
36-null-a orb with arms stretching out of it contact kills you
37-rock-if you see a strange rock…no you don't
38-creeper-a 8 foot tall all dark entity with pointed arms thin kitty from the backrooms
It watches you pretend its not there and your fine it will try to scare you or get your attention in hopes of a response noticing it results in your ceasing to exist
39-pain-a red circle filled with death touching it results in you being put in a void realm where your unconscious mind exists solely you will be forced to wander the endless void and every 10 days you will experience a slightly worse pain after thousands of insanity filled years you will be in so much pain you can no longer move and it will only get worse for eternity
The solution? There's an exit somewhere in the infinite void it could be near or far
The realm resets everyday so if anyone touches the pain they will be put in the same void as you but after that day nobodys there to help you
40-60-headlines-there humanoids who will attempt to fool you most can't talk and each has a unique object for a head
Listed below
A Tv
A Computer
A Pc
A Microwave
A Toaster
A Ax
A Steering Wheel
A Chandelier
A Cellphone
A deformed face
A house
A gun
A machine gun
A trash can
A cube
A paper towel
A normal head
A water jug
A eye
61-70
A shapeshifting object if you look at it it combusts
70-100
A circle
They fuse into complex things
101-the juggler-it randomly and constantly changes form attempting to understand or fully see it will kill you painfully he has a red green blue and black ball as well as an eye that float around his head he will offer a game you have only 3 choices quote them exactly as written or he will kill you
1-Id like to play that game
He will proceed to play the game you lose he will punish you by trapping you in a void,doing something horrible to you,or making you cease to exist
2-i don't think i want to play that
He will offer another game you may only do this 3 times
3-i might want to play that game
He will offer another game after doing this 5 times he will do something horrible to you
102-the collector-it's an orb with a pole sticking out in each direction. it began randomly moving but like an ai it evolved; collecting something gets it a good point, failing to collect something makes it sad. it has now learned to chase after things at insane rates using its 3 senses: hearing, vibrations and heat detection. it collects living or previously living things; it collects by contacting an object with a pole
So what's the point of all this well during the anarchy period in main oddlands quite the opposite happened they activated themselves inside there hosts and began reaping havoc
Okay as promised
Other reality timelines
Floridian
Robot B16 was a dictator who formed it with a life generating gem he then waited and eventually gave it to his past self
Zoro
Time doesn't exist so idk
Bob
Same as zoro
Reality
That's a lot im not writing
Crap nothing really has time other than oddlands floridian and earth
Okay new topic
The islands of kilot
In Between OTC and mainland is a island for each space giant shaped like their initial and filled with things related to their existence
No other spoilers
Okay new new topic
Other side of kilot
Confused?
Yeh me to
Well here's the thing all my maps show one side of kilot with every content on it
But the planets 3D
So uhh…
There's a stupid easy answer
It's a unfolded copy of the world so you can see everything when folded its cool and normal
Anyway here are some maps
Let's get into some more specific characters
Dr.Carbon And Ms.O
He unlike most other humans was from the Floridian Realm
And when he finished his reign of terror against the government began
Carbon studied history,technology,chemistry,and psychology in college
He gathered a group of conspiracy theorists together to overtake the government while they stormed the building he fired up there portal machine and was pulled through into a new place not on any map previously made
Opposite world
You see as i said early there are 3 versions of oddlands but what about reality
You see it just so happens that as carbon was tossed through the white space he landed in just the right spot after hitting just the right universes to land in opposite world a version of reality where everything is completely opposite
Think of it as an unused file in a video game its
Inaccessible to most and extremely unstable and difficult to access
Previously things hadn't actually existed, so him entering formed a paradox in which the universe had to get rid of him to make sure it was still the exact opposite of our world. zeptoseconds before ceasing to exist he shouted THE UNIVERSE ISN'T WHITE
Then opposite land began comprehending the statement
The opposite of a universe is nothing hence the darkness but nothing can't exist so its a pure void of dark this creates a paradox and paradoxes exist in our world
So they can hear this jumpstarts the universe into existence as it begins following a thought process leading to an actual universe of people carbon then summons a portal gun and evacuates the world with a piece of summoning equipment he stole from the opposite realm
When he returns to the white space he continues his actual mission:
Mass destruction. he first visited oddlands gathering the strongest equipment he could
The crystals of Luminari (il speak about her later) he went to zoro and obtained the sword of seas he went to time and stole the watch of eternity he went to the human world and stole a ballistic missile containing a blackhole that had been fit inside a marble attached to a string like a necklace he was unable to find any other realms
With his new equipment he returned to tompto (the floridian realm) and immediately began attacking he impersonated a guard and released viruses on his home this is when the government snapped the created Project Oxygeneral destruction remover or Ms O for short a human floridian crossbreed filled with one goal
The destruction of dr.carbon
They fought
She was seconds away from killing him when he smashed the marble attached to his necklace the blackhole tore through there flesh pulling it through the vortex ate roughly 88% of the universe and then just ceased to exist this isn't the end but first i'll tell you about Luminari
She's the opposite of kitzo implying the existence of kitzos stones and a dark reality which again i'll talk about later
Here's the deal
She created 4 stones of all her power
1-void-the void stone is a black cube that when pressure applied teleports its user
2-pocket-pocket is a circular purple disk with an X in the center by touching it you gain access to a void dimension
3-null-the null crystal is a dark red ever changing shape that can warp reality
4-time-a watch with a time altering ability focused on timelines
Okay so what happened after they went through the vortex well there flesh fused and they eventually appeared in the white space again with no clue how they became known as Notimoto
Or timoto they created a realm know as flesh land a universe purely of flesh and they became apart of it
Okay let's get back to the content of the opposite reality it's the place where opposite oddlands and the hereditary reside that's basically all you need to know however i did mention the stones of kitzo and need to talk about the anti null
So kitzo made 3 stones rather than 4 so…
1-speed-its a yellow stone with a orange glow it alters the speed of things
2-teli-it's dark purple with a pinkish glow, it offers the ability of telekinesis
3-mind-its pink with a black X, it grants its user access to the mind realm
Okay let's really quickly talk about the Anti Null
Its white and instead of creating and destroying realitys it just created tiny rocks of magic disperses them to random places
3 MORE CHARACTER SETS!!
Dunny thunder and Alexis Techsmo
Dunny was basically a deadbeat chef living in an abandoned apartment complex when he found a box containing a robot girl named alexis at first they became friend but she went berserk and destroyed the building locking him deep within a bunker where he sat for an unknown amount of time in complete darkness and a lack of sound after his eardrums shattered whilst entering the bunker via falling 6 stories and being blocked within it by falling rubble after a while he began doing things a specific way every time he was awake he would drink a bottle of water eat a can of raviolis mark how many times he had fallen asleep on the wall and write in his journal
He was never found alive but when his body was eventually found the journal was covered in vines the government took his body and tossed it into a cell eater and when they finished cutting the journal out they read it the first 100 of the 500 pages were literally random scribbles the next 100 was all math the third 100 had each word written in a different code or language and was scrambled about the fourth 100 was images and words that didn't make sense and the next 50 pages were just eyes over and over with the words abandon reality written over an over behind it the final 28 pages (the rest were blank or to torn/burned to read) well the fun part is nobody knows it appears to be a timeline of another world written in gibberish
As for alexis she left the apartment killed 607 people and was tackled by a bunch of teen girls and boys who pushed her into a nearby lake these kids are known as the 8 and deserve to be remembered as they were almost all killed in doing this act of heroism
1-lilly hatchet-a human
2-dato mushrooming-a mushroomling
3-card to-a human
4-yotu-a flying menace
5-popi-a HB
6-Tex-a turtle
7-a unrecognizable winged girl
8-ms oval-a dragonfly girl
The 3 that survived are ms oval,youtu(father of mazo)and tex the turtle
Character set two
Definitions
The pronouns strut/kit/itself
Reference a third gender defined as having an orb floating nearby for reproductive matters
The strange Mr.stockings,
A being with a scribble for a head known to be the most powerful thing conceivable it's smarter than the combined intelligence of everyone of every universe with a total IQ of a 486 duodecillion digit number
And can destroy most living just by thinking about it
Luckily it's a pacifist and the sole survivor of a race called the numits
For size comparison a space giant is
The size of mt everestx3
A universe giant is the size of the observable universe x 88 (big)
And the foot of a numit is 426 times bigger
In total there used to be 22 numits they were the friends of kitzo
But only 5 have records
1.stokkings -(I already talked about)-
2.nagot-(his head is a ball of fire and he's a friendly creature)-
3.truffo-(kit is a creature made solely of random objects with a huge eye on top)-
4.magite-(she's made of -84727636371992 dimensional shapes that's constantly warp)-
5.gorthon-(kit is a giant black creature with a black camera like cube on strut head, kit's only goal is recording all of everything)-
So what happened and whys only one alive
The fall
One day, for no known reason
The creatures became psychotic
1 by 1 starting with magite she began attacking the others either infecting or killing them and destroying gorthons records most people believe she had been restless and became displeased with her power compared to her peers and it corrupted her the virus spread mutilating them into three creatures (except stockings)
1-hellskee-a essence of anger
2-darkmore-a essence of violence
3-truy- a essence of depression
Final set
Emily and violet
Emily is a pansexual 23 year old college student and violet is a ex hitop
(The group responsible for the lack of diversity in oddlands)
They preached inclusivity
Until one day they were eaten by a vortex and fused together and returned to oddlands to obliterate it but failed and were killed
Alrighty i think it's roughly time i talk about some creatures
Mushrooms-Shroomlings are popular due to there immense diversity and power relating to nature
Spiderling-demonic spider humans who hunt down children's shadows
Mimidosa-they have 6 eyes each with a different ability
1-light
2-fire
3-ice
4-time
5-electricity
6-levitation
They camouflage in unpopulated areas and attack people who get close pulling them into the center eye
Lunidor-they come out at night or in the dark but usually are invisible they shoot fire and fly through the air hunting in huge packs and devouring everything
Aginostica-blind humanoids that live in highly advanced underground cities and use an antenna and orb to sense the world around them
Mertainia-ghostly mermaid like entity's that lure in nearby mortals to be eaten they can grow infinitely
Kukinami-chain pickaxe wielding knights
Trufica-a humanoid with an eye for a head they have physic abilities
Nuni-one color humanoids with a elemental ability
Kotica-extremely weak but durable creatures consisting of a humanoid all black form and a coconut head
Vampire-self explanatory
They can morph turn invisible bite others there agile and ageless and they are immune to fire sometimes has elemental or Telekinetic abilities dark vision light and garlic allergy
Halfpire-can morph and float, ages slowly and can see in the dark, sometimes has elemental or Telekinetic abilities, light or garlic allergy
Fourthipire-sometimes Telekinetic extremely agile
Ponko-mini cheese hungry cat humanoids
Glampire-allergic to darkness can create illusions can fly
Demons-usually one of two types either a specific entity from hell.corp section Z known for dark properties or two a humanoid hell walker with abilities in the dark arts
Usually any other kinds are to miniscule to matter
Heldink-a cross breed of human and demon usually capable of bending fire and other arrays of abilities it's a 96 to 4 ratio of women to men
Angels-humans with wings and the ability to bend light use telekinesis and open portals/teleport
Limbolings-
Flame mate-a humanoid made purely of fire Usually female capable of bending fire summoning fire flying teleporting exploding things heating things up ect
Flare-a human wielding the ability to become a flame mate
Torch- a human with weaker versions of the flame mate
Coal-someone who upon contact with a flame mate becomes one
Beast-there are two things this refers to one beast island and island where beasts fight think thin cubey incineroar but with different abilities for each one or two beast corp which is basically SCP
Powerplants-they come in 4 variants each with sub variants
Variant one-powerplant-evil entities that grow in the land of Bob; once fully grown they become evil and gain an ability. their skin is usually pale but can be dark
Subvariant 1-due to a rare disease some develop a blueish/greenish/reddish tinted beige skin color and become good, with the ability to become a rage variant
Variant 2-onecolor-they have one of the 4 abilities and a matching skin tone
Variant 3-a rage-rages are red and harness fire to destroy everything; they're neither good nor bad, solely violent and powerful
Variant 4-a shadow-a dark 5th ability wielder who is half shadow half powerplant
The four abilitys are
Fire
Energy
Earth
Telekinesis
And more rarely
Shadow
Null(none)
Brain
And finally reality
Logic of other minds: everyone's mind is a place based on what they're like and what they enjoy
Okay so here's what's up
Recently ive been thinking about the idea of multiverses
Throughout oddlandian lore
The white space and universes contained in it are now can only only one of three white spaces A(ours)B and C
B is a huge universe where space travel is completely normal but it's the only multiverse to exist
C however contains infinite realms each slightly different from the one before but there's a universe transitioning government
More stuff!
|
Fynd/cyclops_llamav2_13b_2_ep_intent | Fynd | 2024-01-24T01:05:58Z | 1 | 0 | peft | [
"peft",
"safetensors",
"region:us"
] | null | 2024-01-24T01:05:50Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
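For reference, the listing above maps onto a `transformers` `BitsAndBytesConfig`. The sketch below is a non-authoritative reconstruction of how such a config might be used to load a base model in 8-bit and attach this adapter; the base checkpoint name is an assumption inferred from the repository name, not something stated in this card.

```python
# Hedged sketch: the quantization settings above expressed as a BitsAndBytesConfig,
# then used to load a base model in 8-bit and attach this LoRA adapter with PEFT.
# The base model id is an assumption (the card does not name it).
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)

base_id = "meta-llama/Llama-2-13b-hf"  # assumption based on the repo name
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

model = PeftModel.from_pretrained(base_model, "Fynd/cyclops_llamav2_13b_2_ep_intent")
```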
### Framework versions
- PEFT 0.5.0
|
stablediffusionapi/anime-art-diffusion-xl | stablediffusionapi | 2024-01-24T00:54:53Z | 48 | 1 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-01-24T00:52:00Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Anime Art Diffusion XL API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "anime-art-diffusion-xl"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/anime-art-diffusion-xl)
Model link: [View model](https://modelslab.com/models/anime-art-diffusion-xl)
View all models: [View Models](https://modelslab.com/models)
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "anime-art-diffusion-xl",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
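If you prefer to run the checkpoint locally rather than through the API, the repository's `diffusers`/`StableDiffusionXLPipeline` tags suggest something like the sketch below may work; this is an assumption based on the tags, not an officially documented usage.

```python
# Hedged sketch: local SDXL inference with diffusers, assuming the repo hosts
# standard StableDiffusionXLPipeline weights (suggested by the model tags).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stablediffusionapi/anime-art-diffusion-xl", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "ultra realistic close up portrait of a pale cyberpunk female, neon city night",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("anime-art-diffusion-xl.png")
```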
> Use this coupon code to get 25% off **DMGG0RBN** |
Bsbell21/llm_instruction_generator_mistral | Bsbell21 | 2024-01-24T00:44:02Z | 73 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-01-24T00:39:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
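Until this section is filled in, a generic loading sketch for a Mistral-style causal LM stored with 4-bit bitsandbytes weights (as the repository tags suggest) is given below; the prompt format and generation settings are illustrative assumptions, not documented behaviour of this model.

```python
# Hedged sketch: generic Hugging Face loading pattern inferred from the repo tags
# (mistral, text-generation, 4-bit, bitsandbytes). Nothing here is documented by
# this card; the prompt and generation settings are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Bsbell21/llm_instruction_generator_mistral"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write an instruction that could have produced the following output:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```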
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tiagoblima/t5_base-qg-ap-test | tiagoblima | 2024-01-24T00:43:00Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:tiagoblima/du-qg-squadv1_pt",
"base_model:unicamp-dl/ptt5-base-portuguese-vocab",
"base_model:finetune:unicamp-dl/ptt5-base-portuguese-vocab",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-01-17T10:32:42Z | ---
license: mit
base_model: unicamp-dl/ptt5-base-portuguese-vocab
tags:
- generated_from_trainer
datasets:
- tiagoblima/du-qg-squadv1_pt
model-index:
- name: t5_base-qg-ap-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_base-qg-ap-test
This model is a fine-tuned version of [unicamp-dl/ptt5-base-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-base-portuguese-vocab) on the tiagoblima/du-qg-squadv1_pt dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0163
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 100.0
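For readers who want to reproduce the setup, the hyperparameters above translate roughly into the training-arguments sketch below; the output directory and the choice of the seq2seq trainer variant are assumptions, since the card only lists the raw values.

```python
# Hedged sketch: the listed hyperparameters expressed as training arguments.
# Only the numeric values come from this card; everything else is an assumption.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5_base-qg-ap-test",  # assumption
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-6,
    lr_scheduler_type="linear",
    num_train_epochs=100.0,
)
```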
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 12.8054 |
| No log | 2.0 | 2 | 10.7880 |
| No log | 3.0 | 3 | 8.8731 |
| No log | 4.0 | 4 | 7.4068 |
| No log | 5.0 | 5 | 6.4581 |
| No log | 6.0 | 6 | 5.6475 |
| No log | 7.0 | 7 | 4.9596 |
| No log | 8.0 | 8 | 4.5058 |
| No log | 9.0 | 9 | 4.0768 |
| No log | 10.0 | 10 | 3.7047 |
| No log | 11.0 | 11 | 3.4143 |
| No log | 12.0 | 12 | 3.1360 |
| No log | 13.0 | 13 | 2.8866 |
| No log | 14.0 | 14 | 2.6325 |
| No log | 15.0 | 15 | 2.3889 |
| No log | 16.0 | 16 | 2.1914 |
| No log | 17.0 | 17 | 2.0424 |
| No log | 18.0 | 18 | 1.9111 |
| No log | 19.0 | 19 | 1.7763 |
| No log | 20.0 | 20 | 1.6505 |
| No log | 21.0 | 21 | 1.5257 |
| No log | 22.0 | 22 | 1.4126 |
| No log | 23.0 | 23 | 1.3109 |
| No log | 24.0 | 24 | 1.2189 |
| No log | 25.0 | 25 | 1.1338 |
| No log | 26.0 | 26 | 1.0486 |
| No log | 27.0 | 27 | 0.9640 |
| No log | 28.0 | 28 | 0.8828 |
| No log | 29.0 | 29 | 0.8060 |
| No log | 30.0 | 30 | 0.7329 |
| No log | 31.0 | 31 | 0.6639 |
| No log | 32.0 | 32 | 0.6010 |
| No log | 33.0 | 33 | 0.5439 |
| No log | 34.0 | 34 | 0.4925 |
| No log | 35.0 | 35 | 0.4471 |
| No log | 36.0 | 36 | 0.4066 |
| No log | 37.0 | 37 | 0.3690 |
| No log | 38.0 | 38 | 0.3341 |
| No log | 39.0 | 39 | 0.3023 |
| No log | 40.0 | 40 | 0.2746 |
| No log | 41.0 | 41 | 0.2470 |
| No log | 42.0 | 42 | 0.2205 |
| No log | 43.0 | 43 | 0.1968 |
| No log | 44.0 | 44 | 0.1771 |
| No log | 45.0 | 45 | 0.1593 |
| No log | 46.0 | 46 | 0.1424 |
| No log | 47.0 | 47 | 0.1288 |
| No log | 48.0 | 48 | 0.1170 |
| No log | 49.0 | 49 | 0.1070 |
| No log | 50.0 | 50 | 0.0996 |
| No log | 51.0 | 51 | 0.0939 |
| No log | 52.0 | 52 | 0.0888 |
| No log | 53.0 | 53 | 0.0845 |
| No log | 54.0 | 54 | 0.0818 |
| No log | 55.0 | 55 | 0.0790 |
| No log | 56.0 | 56 | 0.0763 |
| No log | 57.0 | 57 | 0.0732 |
| No log | 58.0 | 58 | 0.0697 |
| No log | 59.0 | 59 | 0.0666 |
| No log | 60.0 | 60 | 0.0642 |
| No log | 61.0 | 61 | 0.0611 |
| No log | 62.0 | 62 | 0.0583 |
| No log | 63.0 | 63 | 0.0560 |
| No log | 64.0 | 64 | 0.0532 |
| No log | 65.0 | 65 | 0.0512 |
| No log | 66.0 | 66 | 0.0487 |
| No log | 67.0 | 67 | 0.0464 |
| No log | 68.0 | 68 | 0.0431 |
| No log | 69.0 | 69 | 0.0399 |
| No log | 70.0 | 70 | 0.0381 |
| No log | 71.0 | 71 | 0.0364 |
| No log | 72.0 | 72 | 0.0348 |
| No log | 73.0 | 73 | 0.0333 |
| No log | 74.0 | 74 | 0.0316 |
| No log | 75.0 | 75 | 0.0299 |
| No log | 76.0 | 76 | 0.0285 |
| No log | 77.0 | 77 | 0.0274 |
| No log | 78.0 | 78 | 0.0264 |
| No log | 79.0 | 79 | 0.0253 |
| No log | 80.0 | 80 | 0.0242 |
| No log | 81.0 | 81 | 0.0236 |
| No log | 82.0 | 82 | 0.0231 |
| No log | 83.0 | 83 | 0.0229 |
| No log | 84.0 | 84 | 0.0226 |
| No log | 85.0 | 85 | 0.0223 |
| No log | 86.0 | 86 | 0.0218 |
| No log | 87.0 | 87 | 0.0212 |
| No log | 88.0 | 88 | 0.0205 |
| No log | 89.0 | 89 | 0.0198 |
| No log | 90.0 | 90 | 0.0192 |
| No log | 91.0 | 91 | 0.0186 |
| No log | 92.0 | 92 | 0.0181 |
| No log | 93.0 | 93 | 0.0177 |
| No log | 94.0 | 94 | 0.0173 |
| No log | 95.0 | 95 | 0.0170 |
| No log | 96.0 | 96 | 0.0168 |
| No log | 97.0 | 97 | 0.0166 |
| No log | 98.0 | 98 | 0.0165 |
| No log | 99.0 | 99 | 0.0164 |
| 1.4009 | 100.0 | 100 | 0.0163 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
dctanner/sablo-pebble-mistral-dpo-lora-HelpSteer_binarized | dctanner | 2024-01-24T00:40:19Z | 8 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:sablo/HelpSteer_binarized",
"base_model:sablo/sablo-pebble-mistral",
"base_model:adapter:sablo/sablo-pebble-mistral",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T15:14:46Z | ---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- sablo/HelpSteer_binarized
base_model: sablo/sablo-pebble-mistral
model-index:
- name: sablo-pebble-mistral-dpo-lora-HelpSteer_binarized
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sablo-pebble-mistral-dpo-lora-HelpSteer_binarized
This model is a fine-tuned version of [sablo/sablo-pebble-mistral](https://huggingface.co/sablo/sablo-pebble-mistral) on the sablo/HelpSteer_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5371
- Rewards/chosen: -0.9335
- Rewards/rejected: -1.6455
- Rewards/accuracies: 0.7264
- Rewards/margins: 0.7121
- Logps/rejected: -298.0735
- Logps/chosen: -253.4149
- Logits/rejected: -2.4554
- Logits/chosen: -2.5093
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
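As a rough guide, a DPO LoRA run with these hyperparameters is typically wired up with `trl`'s `DPOTrainer` along the lines of the sketch below; the dataset loading, LoRA settings and the DPO `beta` value are assumptions, since the card does not state them.

```python
# Hedged sketch: approximate trl DPOTrainer setup for the hyperparameters above.
# Dataset split names, LoRA config and beta are assumptions, not from this card.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig
from trl import DPOTrainer

model_id = "sablo/sablo-pebble-mistral"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

dataset = load_dataset("sablo/HelpSteer_binarized")  # split names are an assumption

training_args = TrainingArguments(
    output_dir="sablo-pebble-mistral-dpo-lora-HelpSteer_binarized",  # assumption
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
)

peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32)  # assumption

trainer = DPOTrainer(
    model=model,
    ref_model=None,          # with a PEFT config, the frozen base model acts as reference
    args=training_args,
    beta=0.1,                # assumption; not stated in the card
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"] if "test" in dataset else None,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```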
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6874 | 0.1 | 100 | 0.6892 | 0.0213 | 0.0133 | 0.6698 | 0.0080 | -132.1924 | -157.9395 | -2.4463 | -2.4843 |
| 0.6592 | 0.2 | 200 | 0.6594 | 0.0055 | -0.0704 | 0.6698 | 0.0759 | -140.5588 | -159.5180 | -2.4922 | -2.5370 |
| 0.5451 | 0.3 | 300 | 0.5867 | -0.4490 | -0.7587 | 0.6863 | 0.3097 | -209.3938 | -204.9713 | -2.5128 | -2.5620 |
| 0.4933 | 0.39 | 400 | 0.5591 | -0.6060 | -1.1029 | 0.7146 | 0.4968 | -243.8062 | -220.6713 | -2.4868 | -2.5386 |
| 0.5271 | 0.49 | 500 | 0.5488 | -0.6712 | -1.2738 | 0.7193 | 0.6026 | -260.8958 | -227.1889 | -2.4784 | -2.5312 |
| 0.4594 | 0.59 | 600 | 0.5418 | -0.7977 | -1.4672 | 0.7311 | 0.6695 | -280.2420 | -239.8430 | -2.4672 | -2.5200 |
| 0.5444 | 0.69 | 700 | 0.5358 | -0.7688 | -1.4528 | 0.7335 | 0.6840 | -278.8014 | -236.9531 | -2.4594 | -2.5127 |
| 0.5755 | 0.79 | 800 | 0.5405 | -1.0672 | -1.7631 | 0.7311 | 0.6959 | -309.8293 | -266.7906 | -2.4585 | -2.5118 |
| 0.5495 | 0.89 | 900 | 0.5371 | -0.9321 | -1.6450 | 0.7288 | 0.7129 | -298.0242 | -253.2804 | -2.4558 | -2.5096 |
| 0.5948 | 0.98 | 1000 | 0.5371 | -0.9335 | -1.6455 | 0.7264 | 0.7121 | -298.0735 | -253.4149 | -2.4554 | -2.5093 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.6
- Tokenizers 0.15.0 |
varun-v-rao/t5-base-snli | varun-v-rao | 2024-01-24T00:38:08Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-21T04:13:23Z | ---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: t5-base-snli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-snli
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2842
- Accuracy: 0.8982
## Model description
More information needed
## Intended uses & limitations
More information needed
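Pending details from the author, a minimal inference sketch follows. It assumes the checkpoint was saved with a sequence-classification head and that premise and hypothesis are fed as a sentence pair; check `model.config.id2label` for the actual label names.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "varun-v-rao/t5-base-snli"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(pred, model.config.id2label.get(pred, "unknown label"))
```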
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3813 | 1.0 | 2146 | 0.3113 | 0.8875 |
| 0.3443 | 2.0 | 4292 | 0.2864 | 0.8966 |
| 0.3305 | 3.0 | 6438 | 0.2842 | 0.8982 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
sally9805/bert-base-uncased-finetuned-coha-1900s | sally9805 | 2024-01-24T00:36:23Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-01-22T23:15:04Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-coha-1900s
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-coha-1900s
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5479
## Model description
More information needed
## Intended uses & limitations
More information needed
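Until the authors add details, a minimal fill-mask usage sketch (the example sentence is illustrative, not from the training data):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="sally9805/bert-base-uncased-finetuned-coha-1900s")
for candidate in fill_mask("The [MASK] arrived at the station this morning."):
    print(f"{candidate['token_str']:>12}  {candidate['score']:.3f}")
```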
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.7671 | 1.0 | 22219 | 2.5899 |
| 2.7099 | 2.0 | 44438 | 2.5504 |
| 2.7271 | 3.0 | 66657 | 2.5498 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
oosij/llama-2-ko-7b-ft-emo-multi | oosij | 2024-01-24T00:22:17Z | 0 | 0 | peft | [
"peft",
"base_model:beomi/llama-2-ko-7b",
"base_model:adapter:beomi/llama-2-ko-7b",
"region:us"
] | null | 2024-01-24T00:18:49Z | ---
library_name: peft
base_model: beomi/llama-2-ko-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
A multi-turn chatbot model that is still under study.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
<Prompt Template>
이전 대화와 현재 대화의 명령어를 참고하여 상황에 공감하고 친절한 응답을 생성해주세요. 응답 마지막에는 지금까지의 내용과 관련된 질문을 해주세요.
[이전 대화]
{}
[현재 대화]
### 명령어:
{}
### 응답:

(English gloss of the template: "Referring to the previous conversation and the instruction in the current conversation, generate a response that empathizes with the situation and is friendly. At the end of the response, ask a question related to the conversation so far." Here `[이전 대화]` is the previous conversation, `[현재 대화]` the current conversation, `### 명령어:` the instruction, and `### 응답:` the response.)
## Training procedure
The following `bitsandbytes` quantization config was used during training (a loading sketch that mirrors it follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
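A minimal loading-and-generation sketch that mirrors this config is below. It assumes the adapter applies to `beomi/llama-2-ko-7b` as listed above; the exact whitespace of the prompt template and the generation settings are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 4-bit NF4 loading, matching the quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "beomi/llama-2-ko-7b", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "oosij/llama-2-ko-7b-ft-emo-multi")
tokenizer = AutoTokenizer.from_pretrained("beomi/llama-2-ko-7b")

user_message = "요즘 잠을 잘 못 자요."  # "I have not been sleeping well lately." (the model expects Korean input)
prompt = (
    "이전 대화와 현재 대화의 명령어를 참고하여 상황에 공감하고 친절한 응답을 생성해주세요. "
    "응답 마지막에는 지금까지의 내용과 관련된 질문을 해주세요.\n"
    "[이전 대화]\n\n"
    "[현재 대화]\n"
    f"### 명령어:\n{user_message}\n"
    "### 응답:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```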
### Framework versions
- PEFT 0.6.2
|
tali1/autotrain-gpt2-gpu3 | tali1 | 2024-01-24T00:11:18Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-24T00:11:17Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
yoon1000/ft_0124_korean_1 | yoon1000 | 2024-01-24T00:11:16Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-01-23T07:08:34Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
model-index:
- name: ft_0124_korean_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft_0124_korean_1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4593
- Cer: 0.1067
## Model description
More information needed
## Intended uses & limitations
More information needed
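A minimal transcription sketch (the audio path is a placeholder; 16 kHz mono input is the usual wav2vec2 assumption):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="yoon1000/ft_0124_korean_1")
result = asr("path/to/korean_audio.wav")  # placeholder path; expects 16 kHz mono audio
print(result["text"])
```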
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 33.156 | 0.44 | 500 | 10.0563 | 1.0 |
| 4.9299 | 0.88 | 1000 | 4.8856 | 1.0 |
| 4.6283 | 1.33 | 1500 | 4.5959 | 1.0 |
| 4.4245 | 1.77 | 2000 | 4.2900 | 0.9513 |
| 3.8155 | 2.21 | 2500 | 2.7733 | 0.5324 |
| 2.6597 | 2.65 | 3000 | 2.0091 | 0.4216 |
| 2.1347 | 3.09 | 3500 | 1.5842 | 0.3535 |
| 1.7847 | 3.53 | 4000 | 1.3425 | 0.3124 |
| 1.6031 | 3.98 | 4500 | 1.1478 | 0.2750 |
| 1.3867 | 4.42 | 5000 | 0.9914 | 0.2466 |
| 1.2552 | 4.86 | 5500 | 0.8959 | 0.2258 |
| 1.1442 | 5.3 | 6000 | 0.8326 | 0.2123 |
| 1.0747 | 5.74 | 6500 | 0.7708 | 0.2053 |
| 0.985 | 6.18 | 7000 | 0.7137 | 0.1864 |
| 0.921 | 6.63 | 7500 | 0.6822 | 0.1818 |
| 0.8817 | 7.07 | 8000 | 0.6435 | 0.1716 |
| 0.8043 | 7.51 | 8500 | 0.6338 | 0.1692 |
| 0.7938 | 7.95 | 9000 | 0.6075 | 0.1613 |
| 0.7296 | 8.39 | 9500 | 0.5844 | 0.1578 |
| 0.7061 | 8.83 | 10000 | 0.5695 | 0.1533 |
| 0.6566 | 9.28 | 10500 | 0.5695 | 0.1478 |
| 0.6452 | 9.72 | 11000 | 0.5346 | 0.1439 |
| 0.6178 | 10.16 | 11500 | 0.5184 | 0.1404 |
| 0.5887 | 10.6 | 12000 | 0.5152 | 0.1360 |
| 0.5739 | 11.04 | 12500 | 0.5062 | 0.1356 |
| 0.5338 | 11.48 | 13000 | 0.5135 | 0.1321 |
| 0.5391 | 11.93 | 13500 | 0.5021 | 0.1316 |
| 0.4964 | 12.37 | 14000 | 0.4924 | 0.1269 |
| 0.4959 | 12.81 | 14500 | 0.4860 | 0.1262 |
| 0.4731 | 13.25 | 15000 | 0.4893 | 0.1227 |
| 0.4651 | 13.69 | 15500 | 0.4718 | 0.1204 |
| 0.4446 | 14.13 | 16000 | 0.4815 | 0.1180 |
| 0.4175 | 14.58 | 16500 | 0.4780 | 0.1189 |
| 0.4249 | 15.02 | 17000 | 0.4678 | 0.1163 |
| 0.4073 | 15.46 | 17500 | 0.4599 | 0.1141 |
| 0.3948 | 15.9 | 18000 | 0.4676 | 0.1136 |
| 0.3795 | 16.34 | 18500 | 0.4656 | 0.1119 |
| 0.3807 | 16.78 | 19000 | 0.4642 | 0.1100 |
| 0.3675 | 17.23 | 19500 | 0.4661 | 0.1108 |
| 0.3609 | 17.67 | 20000 | 0.4589 | 0.1086 |
| 0.3454 | 18.11 | 20500 | 0.4645 | 0.1088 |
| 0.3451 | 18.55 | 21000 | 0.4570 | 0.1076 |
| 0.3496 | 18.99 | 21500 | 0.4555 | 0.1072 |
| 0.3327 | 19.43 | 22000 | 0.4619 | 0.1075 |
| 0.334 | 19.88 | 22500 | 0.4593 | 0.1067 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
liminerity/Ingot-7b-slerp-6 | liminerity | 2024-01-24T00:01:41Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/Ingot-7b-slerp-2",
"liminerity/Ingot-7b-slerp-4",
"base_model:liminerity/Ingot-7b-slerp-2",
"base_model:merge:liminerity/Ingot-7b-slerp-2",
"base_model:liminerity/Ingot-7b-slerp-4",
"base_model:merge:liminerity/Ingot-7b-slerp-4",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-23T23:51:06Z | ---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/Ingot-7b-slerp-2
- liminerity/Ingot-7b-slerp-4
base_model:
- liminerity/Ingot-7b-slerp-2
- liminerity/Ingot-7b-slerp-4
---
# Ingot-7b-slerp-6
Ingot-7b-slerp-6 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/Ingot-7b-slerp-2](https://huggingface.co/liminerity/Ingot-7b-slerp-2)
* [liminerity/Ingot-7b-slerp-4](https://huggingface.co/liminerity/Ingot-7b-slerp-4)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/Ingot-7b-slerp-2
layer_range: [0, 32]
- model: liminerity/Ingot-7b-slerp-4
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/Ingot-7b-slerp-2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "liminerity/Ingot-7b-slerp-6"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
onnxruntime/sd-turbo | onnxruntime | 2024-01-23T23:56:54Z | 0 | 2 | null | [
"onnx",
"stable-diffusion",
"sdxl",
"onnxruntime",
"text-to-image",
"en",
"base_model:stabilityai/sd-turbo",
"base_model:quantized:stabilityai/sd-turbo",
"license:other",
"region:us"
] | text-to-image | 2024-01-19T22:42:27Z | ---
pipeline_tag: text-to-image
license: other
license_name: sai-nc-community
license_link: https://huggingface.co/stabilityai/sd-turbo/blob/main/LICENSE.TXT
base_model: stabilityai/sd-turbo
language:
- en
tags:
- stable-diffusion
- sdxl
- onnxruntime
- onnx
- text-to-image
---
# Stable Diffusion Turbo for ONNX Runtime CUDA
## Introduction
This repository hosts the optimized ONNX models of **SD Turbo** to accelerate inference with the ONNX Runtime CUDA execution provider on Nvidia GPUs. The models cannot run with other execution providers such as CPU or DirectML.
The models are generated by [Olive](https://github.com/microsoft/Olive/tree/main/examples/stable_diffusion) with command like the following:
```
python stable_diffusion.py --provider cuda --model_id stabilityai/sd-turbo --optimize --use_fp16_fixed_vae
```
See the [usage instructions](#usage-example) for how to run the SD Turbo pipeline with the ONNX files hosted in this repository.
## Model Description
- **Developed by:** Stability AI
- **Model type:** Diffusion-based text-to-image generative model
- **License:** [STABILITY AI NON-COMMERCIAL RESEARCH COMMUNITY LICENSE](https://huggingface.co/stabilityai/sd-turbo/blob/main/LICENSE)
- **Model Description:** This is a conversion of the [SD-Turbo](https://huggingface.co/stabilityai/sd-turbo) model for [ONNX Runtime](https://github.com/microsoft/onnxruntime) inference with CUDA execution provider.
## Performance
#### Latency
Below is average latency of generating an image of size 512x512 using NVIDIA A100-SXM4-80GB GPU:
| Engine | Batch Size | Steps | ONNX Runtime CUDA |
|-------------|------------|------ | ----------------- |
| Static | 1 | 1 | 38.2 ms |
| Static | 4 | 1 | 120.2 ms |
| Static | 1 | 4 | 68.7 ms |
| Static | 4 | 4 | 192.6 ms |
Static means the engine is built for the given combination of batch size and image size, and CUDA graph is used to speed up inference.
## Usage Example
Follow the [demo instructions](https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/transformers/models/stable_diffusion/README.md#run-demo-with-docker). Example steps:
0. Install nvidia-docker using these [instructions](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
1. Clone onnxruntime repository.
```shell
git clone https://github.com/microsoft/onnxruntime
cd onnxruntime
```
2. Download the ONNX files from this repo
```shell
git lfs install
git clone https://huggingface.co/tlwu/sd-turbo-onnxruntime
```
3. Launch the docker
```shell
docker run --rm -it --gpus all -v $PWD:/workspace nvcr.io/nvidia/pytorch:23.10-py3 /bin/bash
```
4. Build ONNX Runtime from source
```shell
export CUDACXX=/usr/local/cuda-12.2/bin/nvcc
git config --global --add safe.directory '*'
sh build.sh --config Release --build_shared_lib --parallel --use_cuda --cuda_version 12.2 \
--cuda_home /usr/local/cuda-12.2 --cudnn_home /usr/lib/x86_64-linux-gnu/ --build_wheel --skip_tests \
--use_tensorrt --tensorrt_home /usr/src/tensorrt \
--cmake_extra_defines onnxruntime_BUILD_UNIT_TESTS=OFF \
--cmake_extra_defines CMAKE_CUDA_ARCHITECTURES=80 \
--allow_running_as_root
python3 -m pip install build/Linux/Release/dist/onnxruntime_gpu-*-cp310-cp310-linux_x86_64.whl --force-reinstall
```
If the GPU is not A100, change CMAKE_CUDA_ARCHITECTURES=80 in the command line according to the GPU compute capability (for example, 89 for RTX 4090 or 86 for RTX 3090). If your machine has less than 64GB of memory, replace --parallel with --parallel 4 --nvcc_threads 1 to avoid running out of memory.
5. Install libraries and requirements
```shell
python3 -m pip install --upgrade pip
cd /workspace/onnxruntime/python/tools/transformers/models/stable_diffusion
python3 -m pip install -r requirements-cuda12.txt
python3 -m pip install --upgrade polygraphy onnx-graphsurgeon --extra-index-url https://pypi.ngc.nvidia.com
```
6. Perform ONNX Runtime optimized inference
```shell
python3 demo_txt2img.py \
"starry night over Golden Gate Bridge by van gogh" \
--version sd-turbo \
--engine-dir /workspace/sd-turbo-onnxruntime
```
|
h2m/BurningBruce-004-4x7b | h2m | 2024-01-23T23:53:24Z | 15 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"arxiv:2101.03961",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-20T09:51:48Z | 
# A frankenMoE of 4 merged models
BurningBruce is a codename given to models created by members of Convex. Our purpose is to try our hand at making the most well-rounded models possible without the hassle of building and maintaining hundreds of thousands of dollars' worth of equipment.
We will be sending Bruce through many different iterations, hopefully each one improving upon the last.
The mergekit config can be found in the files.
# "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps.
Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.
So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:
Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token "Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively.

Switch Layer
MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)
So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts.
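As a concrete, hedged illustration (not the routing code used in this particular merge), a minimal top-2 MoE layer in PyTorch could look like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoELayer(nn.Module):
    """Minimal sparse MoE layer: a linear router picks 2 experts per token."""

    def __init__(self, hidden_size: int, ffn_size: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(hidden_size, num_experts, bias=False)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(hidden_size, ffn_size), nn.SiLU(), nn.Linear(ffn_size, hidden_size))
             for _ in range(num_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, hidden_size), i.e. batch and sequence dims already flattened
        logits = self.router(x)                                 # (tokens, num_experts)
        weights, idx = torch.topk(logits, self.top_k, dim=-1)   # pick the top-k experts per token
        weights = F.softmax(weights, dim=-1)                    # normalize the selected gate scores
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                           # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out
```

Production MoE implementations batch tokens per expert instead of looping, and add the auxiliary load-balancing loss discussed below.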
Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges:
Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting.
Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon).
If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter.
## "Wait...but you called this a frankenMoE?"
The difference between MoE and "frankenMoE" lies in the fact that the router layer in a model like the one on this repo is not trained simultaneously. |
WIS/poca-SoccerTwos | WIS | 2024-01-23T23:48:13Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2024-01-23T23:47:41Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: WIS/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
AISimplyExplained/Vakil-7B | AISimplyExplained | 2024-01-23T23:42:47Z | 1,511 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"legal",
"en",
"dataset:AISimplyExplained/LegalReasoningIndianLaw",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-01-23T12:50:25Z | ---
license: mit
datasets:
- AISimplyExplained/LegalReasoningIndianLaw
language:
- en
library_name: transformers
tags:
- legal
inference: false
---
# Vakil-7B Model Card
### Model Description
Vakil-7B is a state-of-the-art language model fine-tuned on the `AISimplyExplained/LegalReasoningIndianLaw` dataset for specialization in the nuances and complexities of Indian law. It is designed to provide legal professionals, students, and researchers with insights and assistance in understanding legal documents and queries within the context of the Indian legal system.
Developed by Asmi Gulati and Bhuvi Jain, this tool aims to enhance the accessibility and analysis of legal texts, driving forward the digital transformation in the legal domain.
### Model Specifications
- **Developed by:** Asmi Gulati and Bhuvi Jain
- **Model type:** Fine-tuned language model
- **Language(s) (NLP):** English, with a focus on Indian legal terminology
- **License:** MIT
- **Finetuned from model:** `transformers` library model
## Directions for Usage
```python
!pip install "unsloth[colab_ampere] @ git+https://github.com/unslothai/unsloth.git"
!pip install "git+https://github.com/huggingface/transformers.git"
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("AISimplyExplained/Vakil-7B")
model = AutoModelForCausalLM.from_pretrained("AISimplyExplained/Vakil-7B")
```
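A possible generation step after loading, reusing `tokenizer` and `model` from the block above (the prompt wording and generation settings are illustrative, not from the authors):

```python
prompt = "Explain the concept of anticipatory bail under Indian law."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```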
### Intended Use
Vakil-7B is intended for direct use by legal professionals and researchers who need to interact with Indian legal text. It is designed to assist with legal research, drafting, and education by providing AI-driven analysis and insights.
### Out-of-Scope Use
Vakil-7B is not designed to replace professional legal advice or to be used as a standalone decision-making tool. It should be used as an aid in the legal research and analysis process, not as the sole source of guidance.
## Bias, Risks, and Limitations
Users should be aware of the inherent limitations of AI in interpreting legal text. Vakil-7B, while sophisticated, may not capture all nuances and should be used in conjunction with professional judgment. |
Mukalingam0813/finnish-intent-classifier | Mukalingam0813 | 2024-01-23T23:39:44Z | 44 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-23T06:59:31Z | ---
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_keras_callback
model-index:
- name: Mukalingam0813/finnish-intent-classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Mukalingam0813/finnish-intent-classifier
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5299
- Validation Loss: 1.5244
- Train Accuracy: 0.4816
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
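In the meantime, a hedged TensorFlow inference sketch (the intent label names live in `model.config.id2label` and are not documented here; the Finnish example sentence is illustrative):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo_id = "Mukalingam0813/finnish-intent-classifier"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFAutoModelForSequenceClassification.from_pretrained(repo_id)

# "I would like to book an appointment for next week."
inputs = tokenizer("Haluaisin varata ajan ensi viikolle.", return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label.get(pred, pred))
```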
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 24, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.5298 | 1.5244 | 0.4816 | 0 |
| 1.5301 | 1.5244 | 0.4816 | 1 |
| 1.5299 | 1.5244 | 0.4816 | 2 |
### Framework versions
- Transformers 4.36.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
liminerity/Ingot-7b-slerp-5 | liminerity | 2024-01-23T23:29:29Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/Ingot-7b-slerp",
"liminerity/Ingot-7b-slerp-3",
"base_model:liminerity/Ingot-7b-slerp",
"base_model:merge:liminerity/Ingot-7b-slerp",
"base_model:liminerity/Ingot-7b-slerp-3",
"base_model:merge:liminerity/Ingot-7b-slerp-3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-23T23:20:28Z | ---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/Ingot-7b-slerp
- liminerity/Ingot-7b-slerp-3
base_model:
- liminerity/Ingot-7b-slerp
- liminerity/Ingot-7b-slerp-3
---
# Ingot-7b-slerp-5
Ingot-7b-slerp-5 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/Ingot-7b-slerp](https://huggingface.co/liminerity/Ingot-7b-slerp)
* [liminerity/Ingot-7b-slerp-3](https://huggingface.co/liminerity/Ingot-7b-slerp-3)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/Ingot-7b-slerp
layer_range: [0, 32]
- model: liminerity/Ingot-7b-slerp-3
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/Ingot-7b-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "liminerity/Ingot-7b-slerp-5"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Abhinav28/large-v3-hi-commonvoice-11-peft-trained-adapter-withfp16-50-percent | Abhinav28 | 2024-01-23T23:22:34Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v3",
"base_model:adapter:openai/whisper-large-v3",
"region:us"
] | null | 2024-01-23T23:22:23Z | ---
library_name: peft
base_model: openai/whisper-large-v3
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
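In the absence of author-provided code, a hedged sketch of attaching this adapter to the base model (the base checkpoint `openai/whisper-large-v3` is taken from the metadata above; the audio-handling lines are commented placeholders):

```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v3", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "Abhinav28/large-v3-hi-commonvoice-11-peft-trained-adapter-withfp16-50-percent"
)
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")

# With `audio` as a 1-D numpy array sampled at 16 kHz:
# features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
# ids = model.generate(input_features=features.to(model.device, dtype=torch.float16))
# print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```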
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
liminerity/Ingot-7b-slerp-4 | liminerity | 2024-01-23T23:03:58Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"222gate/Blurred-Beagle-7b-slerp",
"andrijdavid/Macaroni-7b-Tied",
"conversational",
"base_model:andrijdavid/Macaroni-7b-Tied",
"base_model:merge:andrijdavid/Macaroni-7b-Tied",
"base_model:gate369/Blurred-Beagle-7b-slerp",
"base_model:merge:gate369/Blurred-Beagle-7b-slerp",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-23T22:56:18Z | ---
tags:
- merge
- mergekit
- lazymergekit
- 222gate/Blurred-Beagle-7b-slerp
- andrijdavid/Macaroni-7b-Tied
base_model:
- 222gate/Blurred-Beagle-7b-slerp
- andrijdavid/Macaroni-7b-Tied
---
# Ingot-7b-slerp-4
Ingot-7b-slerp-4 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [222gate/Blurred-Beagle-7b-slerp](https://huggingface.co/222gate/Blurred-Beagle-7b-slerp)
* [andrijdavid/Macaroni-7b-Tied](https://huggingface.co/andrijdavid/Macaroni-7b-Tied)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: 222gate/Blurred-Beagle-7b-slerp
layer_range: [0, 32]
- model: andrijdavid/Macaroni-7b-Tied
layer_range: [0, 32]
merge_method: slerp
base_model: 222gate/Blurred-Beagle-7b-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "liminerity/Ingot-7b-slerp-4"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ai-maker-space/mistral7b_instruct_generation | ai-maker-space | 2024-01-23T23:02:11Z | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-23T23:02:05Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral7b_instruct_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral7b_instruct_generation
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8004
## Model description
More information needed
## Intended uses & limitations
More information needed
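Pending details from the authors, a minimal inference sketch: it assumes the adapter loads via `AutoPeftModelForCausalLM`, and the "### Instruction / ### Response" prompt format is an assumption rather than a documented template.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo_id = "ai-maker-space/mistral7b_instruct_generation"
model = AutoPeftModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")  # base model tokenizer

prompt = "### Instruction:\nExplain what instruction tuning is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```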
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.857 | 0.0 | 20 | 1.8252 |
| 1.9363 | 0.01 | 40 | 1.7948 |
| 1.7643 | 0.01 | 60 | 1.7910 |
| 1.861 | 0.01 | 80 | 1.7915 |
| 1.8702 | 0.01 | 100 | 1.7903 |
| 1.8619 | 0.02 | 120 | 1.7905 |
| 1.7669 | 0.02 | 140 | 1.7956 |
| 1.8062 | 0.02 | 160 | 1.7895 |
| 1.7802 | 0.03 | 180 | 1.7958 |
| 1.7773 | 0.03 | 200 | 1.7855 |
| 1.8692 | 0.03 | 220 | 1.7936 |
| 1.7815 | 0.03 | 240 | 1.7939 |
| 1.8642 | 0.04 | 260 | 1.7990 |
| 1.8715 | 0.04 | 280 | 1.7953 |
| 1.8999 | 0.04 | 300 | 1.7999 |
| 1.7691 | 0.04 | 320 | 1.7919 |
| 1.7743 | 0.05 | 340 | 1.7973 |
| 1.7692 | 0.05 | 360 | 1.7919 |
| 1.954 | 0.05 | 380 | 1.7934 |
| 1.8872 | 0.06 | 400 | 1.7966 |
| 1.8925 | 0.06 | 420 | 1.7942 |
| 1.8384 | 0.06 | 440 | 1.7949 |
| 1.825 | 0.06 | 460 | 1.7982 |
| 1.8151 | 0.07 | 480 | 1.7959 |
| 1.8599 | 0.07 | 500 | 1.8004 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
toinoudu14/PPO_Unit1 | toinoudu14 | 2024-01-23T22:59:19Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-23T22:59:02Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.17 +/- 24.04
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (filename is assumed) and load it
checkpoint = load_from_hub("toinoudu14/PPO_Unit1", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
RiverTest/RiverMTG23 | RiverTest | 2024-01-23T22:53:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T22:53:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KukuruJPN/Oliver_Atom | KukuruJPN | 2024-01-23T22:46:09Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2024-01-23T22:39:49Z | ---
license: other
license_name: msamsm
license_link: LICENSE
---
|
imperatrona/epiCRealism5 | imperatrona | 2024-01-23T22:43:06Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-01-23T11:22:11Z | ---
license: creativeml-openrail-m
---
|
tali1/autotrain-gpt2-gpu1 | tali1 | 2024-01-23T22:37:00Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-23T22:36:59Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
liminerity/Ingot-7b-slerp-2 | liminerity | 2024-01-23T22:30:03Z | 7 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/Ingot-7b-slerp",
"flemmingmiguel/MBX-7B",
"base_model:flemmingmiguel/MBX-7B",
"base_model:merge:flemmingmiguel/MBX-7B",
"base_model:liminerity/Ingot-7b-slerp",
"base_model:merge:liminerity/Ingot-7b-slerp",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-23T22:21:42Z | ---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/Ingot-7b-slerp
- flemmingmiguel/MBX-7B
base_model:
- liminerity/Ingot-7b-slerp
- flemmingmiguel/MBX-7B
---
# Ingot-7b-slerp-2
Ingot-7b-slerp-2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/Ingot-7b-slerp](https://huggingface.co/liminerity/Ingot-7b-slerp)
* [flemmingmiguel/MBX-7B](https://huggingface.co/flemmingmiguel/MBX-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/Ingot-7b-slerp
layer_range: [0, 32]
- model: flemmingmiguel/MBX-7B
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/Ingot-7b-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "liminerity/Ingot-7b-slerp-2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Nerdofdot/Nerdofdot_nickprock_mmarco-bert-base-italian-uncased_TM_FTM | Nerdofdot | 2024-01-23T22:20:22Z | 47 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-01-23T22:20:03Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7975 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 0.4}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2392,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
DaRkSpyro/HiccupHowToTrainYourDragon | DaRkSpyro | 2024-01-23T22:12:46Z | 0 | 0 | flair | [
"flair",
"music",
"en",
"dataset:HuggingFaceM4/WebSight",
"license:apache-2.0",
"region:us"
] | null | 2024-01-23T22:11:44Z | ---
license: apache-2.0
datasets:
- HuggingFaceM4/WebSight
language:
- en
metrics:
- accuracy
library_name: flair
tags:
- music
--- |
Shijia/flan_biomedidal | Shijia | 2024-01-23T22:11:18Z | 118 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:sem_eval_2024_task_2",
"base_model:Shijia/run1",
"base_model:finetune:Shijia/run1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-01-23T21:25:00Z | ---
license: apache-2.0
base_model: Shijia/run1
tags:
- generated_from_trainer
datasets:
- sem_eval_2024_task_2
metrics:
- accuracy
model-index:
- name: flan_biomedidal
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: sem_eval_2024_task_2
type: sem_eval_2024_task_2
config: sem_eval_2024_task_2_source
split: validation
args: sem_eval_2024_task_2_source
metrics:
- name: Accuracy
type: accuracy
value: 0.5
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan_biomedidal
This model is a fine-tuned version of [Shijia/run1](https://huggingface.co/Shijia/run1) on the sem_eval_2024_task_2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3473
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
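A hedged inference sketch (the prompt format used during fine-tuning is not documented, so the wording below is an assumption):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "Shijia/flan_biomedidal"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

text = (
    "Premise: The trial enrolled adults with type 2 diabetes. "
    "Statement: Children were eligible for the trial. "
    "Answer Entailment or Contradiction:"
)
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```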
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 58 | 0.3479 | 0.5 |
| 0.3671 | 2.0 | 116 | 0.3496 | 0.5 |
| 0.3671 | 3.0 | 174 | 0.3486 | 0.5 |
| 0.37 | 4.0 | 232 | 0.3477 | 0.5 |
| 0.37 | 5.0 | 290 | 0.3473 | 0.5 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
BarryFutureman/WildMarcoroni-Variant3-7B | BarryFutureman | 2024-01-23T22:10:32Z | 199 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-23T22:01:46Z | ---
base_model: []
tags:
- mergekit
- merge
license: apache-2.0
---
# WildMarcoroni-Variant3-7B
Merge.
onarganogun/videomae-large-fight_22-01-2024 | onarganogun | 2024-01-23T22:08:59Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-large",
"base_model:finetune:MCG-NJU/videomae-large",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2024-01-23T17:28:56Z | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-large
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: videomae-large-fight_22-01-2024
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-large-fight_22-01-2024
This model is a fine-tuned version of [MCG-NJU/videomae-large](https://huggingface.co/MCG-NJU/videomae-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6263
- Accuracy: 0.8565
- Precision: 0.8502
- Recall: 0.8655
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 9080
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|
| 0.6582 | 0.05 | 454 | 0.6970 | 0.5695 | 0.5660 | 0.5964 |
| 0.6712 | 1.05 | 908 | 0.6281 | 0.6390 | 0.6202 | 0.7175 |
| 0.5664 | 2.05 | 1362 | 0.6718 | 0.6457 | 0.6555 | 0.6143 |
| 0.5645 | 3.05 | 1816 | 0.5835 | 0.7018 | 0.6974 | 0.7130 |
| 0.4259 | 4.05 | 2270 | 0.5497 | 0.7197 | 0.7402 | 0.6771 |
| 0.3542 | 5.05 | 2724 | 0.5509 | 0.7466 | 0.7434 | 0.7534 |
| 0.3676 | 6.05 | 3178 | 0.4956 | 0.7623 | 0.7532 | 0.7803 |
| 0.2656 | 7.05 | 3632 | 0.5263 | 0.7534 | 0.7811 | 0.7040 |
| 0.4675 | 8.05 | 4086 | 0.5216 | 0.7915 | 0.8009 | 0.7758 |
| 0.1434 | 9.05 | 4540 | 0.4744 | 0.8094 | 0.8136 | 0.8027 |
| 0.1389 | 10.05 | 4994 | 0.5389 | 0.8318 | 0.8274 | 0.8386 |
| 0.3228 | 11.05 | 5448 | 0.5345 | 0.8341 | 0.8599 | 0.7982 |
| 0.1044 | 12.05 | 5902 | 0.5729 | 0.8341 | 0.8465 | 0.8161 |
| 0.0305 | 13.05 | 6356 | 0.5812 | 0.8363 | 0.8378 | 0.8341 |
| 0.1256 | 14.05 | 6810 | 0.5806 | 0.8520 | 0.8489 | 0.8565 |
| 0.2735 | 15.05 | 7264 | 0.5713 | 0.8520 | 0.8618 | 0.8386 |
| 0.2376 | 16.05 | 7718 | 0.6030 | 0.8498 | 0.8578 | 0.8386 |
| 0.2978 | 17.05 | 8172 | 0.6263 | 0.8565 | 0.8502 | 0.8655 |
| 0.3872 | 18.05 | 8626 | 0.6099 | 0.8520 | 0.8489 | 0.8565 |
| 0.6629 | 19.05 | 9080 | 0.6142 | 0.8543 | 0.8496 | 0.8610 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
OdiaGenAI/odiagenAI-bengali-base-model-v1 | OdiaGenAI | 2024-01-23T22:05:10Z | 62 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"bn",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-08T18:55:19Z | ---
license: cc-by-nc-4.0
language:
- bn
---
# Model Card for Model ID
[](https://creativecommons.org/licenses/by-nc-sa/4.0/)
## Model description
odiagenAI-bengali-base-model-v1 is based on Llama-7b and fine-tuned on a 252k Bengali instruction set. The instruction set was translated from open-source resources, giving the model good Bengali instruction understanding and response generation capabilities.
The code of Bengali data generation and other detailed information can be found in our Github project repository: https://github.com/OdiaGenAI/GenerativeAI_and_LLM_Odia.
## Training hyper-parameters
| Parameter | Value |
| ------ | ------ |
| Batch size | 128 |
| Learning rate | 3e-4 |
| Epochs | 5 |
| Cutoff length | 256 |
| Weight_decay | 0.001 |
| Warmup_rate | 0.1 |
| LR_scheduler | linear |
| Lora r | 16 |
| Lora target modules | (q_proj, k_proj, v_proj, o_proj) |
Instructions for running it can be found at https://github.com/OdiaGenAI/GenerativeAI_and_LLM_Odia.
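As an illustration only, the LoRA rows in the table above roughly correspond to a PEFT configuration like the sketch below; the base checkpoint id, `lora_alpha`, and dropout are assumptions not stated in this card.
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Assumed Llama-7b checkpoint; the exact base weights are not named in this card.
base_model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")

lora_config = LoraConfig(
    r=16,                                                      # Lora r (from the table)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],   # Lora target modules (from the table)
    lora_alpha=32,                                             # assumption: not stated in the card
    lora_dropout=0.05,                                         # assumption: not stated in the card
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```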
### Licensing Information
This work is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg
### Citation Information
If you find this repository helpful, please consider giving 👏 and citing:
```
@misc{OdiaGenAI-Bengali-LLM,
author = {Shantipriya Parida and Sambit Sekhar and Guneet Singh Kohli and Arghyadeep Sen and Shashikanta Sahoo},
title = {Bengali Instruction-Tuning Model},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/OdiaGenAI}},
}
```
### Contributions
- Shantipriya Parida
- Sambit Sekhar
- Guneet Singh Kohli
- Arghyadeep Sen
- Shashikanta Sahoo |
rouven-spiess/llama2_instruct_generation | rouven-spiess | 2024-01-23T21:54:20Z | 5 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:NousResearch/Llama-2-7b-hf",
"base_model:adapter:NousResearch/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-23T21:53:56Z | ---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: NousResearch/Llama-2-7b-hf
model-index:
- name: llama2_instruct_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2_instruct_generation
This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6750
## Model description
More information needed
## Intended uses & limitations
More information needed
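As a usage sketch (the prompt format and generation settings are illustrative assumptions), the adapter can be loaded on top of the base model with PEFT:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")

# Attach the fine-tuned LoRA adapter from this repository.
model = PeftModel.from_pretrained(base_model, "rouven-spiess/llama2_instruct_generation")

# Illustrative prompt; the exact instruction format used in training is not documented here.
prompt = "### Instruction: Summarize the benefits of unit testing.\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```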
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8413 | 0.0 | 20 | 1.8102 |
| 1.9042 | 0.01 | 40 | 1.7811 |
| 1.8402 | 0.01 | 60 | 1.7657 |
| 1.8856 | 0.01 | 80 | 1.7511 |
| 1.9212 | 0.01 | 100 | 1.7390 |
| 1.807 | 0.02 | 120 | 1.7090 |
| 1.8321 | 0.02 | 140 | 1.7029 |
| 1.871 | 0.02 | 160 | 1.6979 |
| 1.848 | 0.02 | 180 | 1.6947 |
| 1.8378 | 0.03 | 200 | 1.6908 |
| 1.746 | 0.03 | 220 | 1.6893 |
| 1.7568 | 0.03 | 240 | 1.6874 |
| 1.8227 | 0.04 | 260 | 1.6860 |
| 1.8134 | 0.04 | 280 | 1.6835 |
| 1.8026 | 0.04 | 300 | 1.6819 |
| 1.8267 | 0.04 | 320 | 1.6831 |
| 1.7998 | 0.05 | 340 | 1.6816 |
| 1.8747 | 0.05 | 360 | 1.6793 |
| 1.8478 | 0.05 | 380 | 1.6785 |
| 1.8627 | 0.05 | 400 | 1.6776 |
| 1.7956 | 0.06 | 420 | 1.6783 |
| 1.7184 | 0.06 | 440 | 1.6764 |
| 1.7038 | 0.06 | 460 | 1.6753 |
| 1.9049 | 0.07 | 480 | 1.6764 |
| 1.8113 | 0.07 | 500 | 1.6750 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
daniel123321/whisper-small-ger | daniel123321 | 2024-01-23T21:47:37Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-01-23T14:52:11Z | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-ger
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-ger
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3770
- Wer: 26.2540
## Model description
More information needed
## Intended uses & limitations
More information needed
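As a rough usage sketch (not part of the original card), the fine-tuned checkpoint can be run through the standard ASR pipeline; the audio file path is a placeholder.
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint from this repository.
asr = pipeline("automatic-speech-recognition", model="daniel123321/whisper-small-ger")

# "audio.wav" is a placeholder for a German speech recording.
result = asr("audio.wav")
print(result["text"])
```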
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.7151 | 0.07 | 200 | 0.6823 | 34.0413 |
| 0.5949 | 0.13 | 400 | 0.6183 | 40.4059 |
| 0.5127 | 0.2 | 600 | 0.5119 | 27.8722 |
| 0.4318 | 0.27 | 800 | 0.4223 | 26.6114 |
| 0.352 | 0.33 | 1000 | 0.3770 | 26.2540 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
mayflowergmbh/DiscoPhoenix-7B | mayflowergmbh | 2024-01-23T21:41:30Z | 7 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"DiscoResearch/DiscoLM_German_7b_v1",
"DRXD1000/Phoenix",
"OpenPipe/mistral-ft-optimized-1227",
"base_model:DRXD1000/Phoenix-7B",
"base_model:merge:DRXD1000/Phoenix-7B",
"base_model:DiscoResearch/DiscoLM_German_7b_v1",
"base_model:merge:DiscoResearch/DiscoLM_German_7b_v1",
"base_model:OpenPipe/mistral-ft-optimized-1227",
"base_model:merge:OpenPipe/mistral-ft-optimized-1227",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-23T21:33:42Z | ---
tags:
- merge
- mergekit
- lazymergekit
- DiscoResearch/DiscoLM_German_7b_v1
- DRXD1000/Phoenix
- OpenPipe/mistral-ft-optimized-1227
base_model:
- DiscoResearch/DiscoLM_German_7b_v1
- DRXD1000/Phoenix
- OpenPipe/mistral-ft-optimized-1227
---
# DiscoPhoenix-7B
DiscoPhoenix-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [DiscoResearch/DiscoLM_German_7b_v1](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1)
* [DRXD1000/Phoenix](https://huggingface.co/DRXD1000/Phoenix)
* [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: DiscoResearch/DiscoLM_German_7b_v1
parameters:
density: 0.6
weight: 0.3
- model: DRXD1000/Phoenix
parameters:
density: 0.6
weight: 0.3
- model: OpenPipe/mistral-ft-optimized-1227
parameters:
density: 0.6
weight: 0.4
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mayflowergmbh/DiscoPhoenix-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
chavudosoa/test3 | chavudosoa | 2024-01-23T21:41:26Z | 0 | 0 | keras | [
"keras",
"text-generation",
"license:mit",
"region:us"
] | text-generation | 2024-01-23T21:36:59Z | ---
license: mit
library_name: keras
pipeline_tag: text-generation
--- |
adaca001/clasificador-muchocine | adaca001 | 2024-01-23T21:28:19Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"safetensors",
"electra",
"classification",
"generated_from_trainer",
"en",
"dataset:muchocine",
"base_model:mrm8488/electricidad-base-discriminator",
"base_model:adapter:mrm8488/electricidad-base-discriminator",
"region:us"
] | null | 2024-01-23T20:13:29Z | ---
base_model: mrm8488/electricidad-base-discriminator
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
results: []
datasets:
- muchocine
language:
- en
library_name: adapter-transformers
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4141
- Accuracy: 0.3639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.5202 | 0.3381 |
| 1.5131 | 2.0 | 776 | 1.4459 | 0.3394 |
| 1.3789 | 3.0 | 1164 | 1.4141 | 0.3639 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
deadniell/fade_valorant_latam_v3 | deadniell | 2024-01-23T21:19:12Z | 0 | 1 | null | [
"valorant",
"español latino",
"riot games",
"es",
"license:openrail",
"region:us"
] | null | 2024-01-23T21:17:39Z | ---
license: openrail
language:
- es
tags:
- valorant
- español latino
- riot games
--- |
hojzas/proj8-mistral-new | hojzas | 2024-01-23T21:12:26Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-23T19:54:10Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
LoneStriker/code-millenials-34b-8.0bpw-h8-exl2 | LoneStriker | 2024-01-23T21:12:09Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-23T20:58:11Z | ---
license: llama2
library_name: transformers
tags:
- code
model-index:
- name: Code Millenials
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.8048
verified: false
---
# Bud Code Millenials 34B
Welcome to our Code Model repository! Our model is specifically fine-tuned for code generation tasks. Bud Millenial Code Gen open-source models are currently the State of the Art (SOTA) for code generation, beating all existing models of all sizes. We have achieved a HumanEval value of 80.48 @ Pass 1, beating proprietary models like Gemini Ultra, Claude and GPT-3.5 by a large margin, and on par with GPT-4 (HumanEval ~ 82, ref. WizardCoder). Our proprietary model (Bud Code Jr) beats GPT-4 as well, with a HumanEval value of 88.2 and a context size of 168K; we will be releasing an API for researchers, enterprises, and potential partners by the end of January 2024. If interested, please reach out to [email protected]
### News 🔥🔥🔥
- [2024/01/09] We released **Code Millenials 3B** , which achieves the **56.09 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/09] We released **Code Millenials 1B** , which achieves the **51.82 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/03] We released **Code Millenials 34B** , which achieves the **80.48 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/02] We released **Code Millenials 13B** , which achieves the **76.21 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
### HumanEval
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/BudEcosystem/code-millenials/main/assets/result.png" alt="CodeMillenials" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
For the millenial models, the eval script in the github repo is used for the above result.
Note: The humaneval values of other models are taken from the official repos of [WizardCoder](https://github.com/nlpxucan/WizardLM), [DeepseekCoder](https://github.com/deepseek-ai/deepseek-coder), [Gemini](https://deepmind.google/technologies/gemini/#capabilities) etc.
### Models
| Model | Checkpoint | HumanEval (+) | MBPP (+) |
|---------|-------------|---------------|----------|
|Code Millenials 34B | <a href="https://huggingface.co/budecosystem/code-millenials-34b" target="_blank">HF Link</a> | 80.48 (75) | 74.68 (62.9) |
|Code Millenials 13B | <a href="https://huggingface.co/budecosystem/code-millenials-13b" target="_blank">HF Link</a> | 76.21 (69.5) | 70.17 (57.6) |
|Code Millenials 3B | <a href="https://huggingface.co/budecosystem/code-millenials-3b" target="_blank">HF Link</a> | 56.09 (52.43) | 55.13 (47.11) |
|Code Millenials 1B | <a href="https://huggingface.co/budecosystem/code-millenials-1b" target="_blank">HF Link</a> | 51.82 (48.17) | 53.13 (44.61) |
### 🚀 Quick Start
Inference code using the pre-trained model from the Hugging Face model hub
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-34b")
model = AutoModelForCausalLM.from_pretrained("budecosystem/code-millenials-34b")
template = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Instruction: {instruction}
### Response:"""
instruction = "<Your code instruction here>"  # replace with your code instruction
prompt = template.format(instruction=instruction)
inputs = tokenizer(prompt, return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
## Training details
The model was trained on 16 A100 80GB GPUs for approximately 50 hours.
| Hyperparameters | Value |
| :----------------------------| :-----: |
| per_device_train_batch_size | 16 |
| gradient_accumulation_steps | 1 |
| epoch | 3 |
| steps | 2157 |
| learning_rate | 2e-5 |
| lr schedular type | cosine |
| warmup ratio | 0.1 |
| optimizer | adamw |
| fp16 | True |
| GPU | 16 A100 80GB |
### Important Note
- **Bias, Risks, and Limitations:** The model may sometimes make errors, produce misleading content, or struggle with tasks that are not related to coding.
Weni/WeniGPT-2.2.1-Zephyr-7B-1-epoch-merged-LLM_Base_2.0.3_SFT | Weni | 2024-01-23T21:10:40Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-23T21:06:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CLMBR/npi-only-transformer-1 | CLMBR | 2024-01-23T21:00:38Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-18T14:29:46Z | ---
tags:
- generated_from_trainer
model-index:
- name: npi-only-transformer-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# npi-only-transformer-1
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.224 | 0.03 | 76320 | 4.1960 |
| 4.021 | 1.03 | 152640 | 4.0259 |
| 3.9128 | 0.03 | 228960 | 3.9511 |
| 3.8414 | 1.03 | 305280 | 3.9098 |
| 3.7915 | 0.03 | 381600 | 3.8843 |
| 3.7489 | 1.03 | 457920 | 3.8684 |
| 3.7172 | 0.03 | 534240 | 3.8578 |
| 3.6886 | 1.03 | 610560 | 3.8503 |
| 3.6592 | 0.03 | 686880 | 3.8463 |
| 3.635 | 1.03 | 763200 | 3.8440 |
| 3.6089 | 0.03 | 839520 | 3.8414 |
| 3.5858 | 1.03 | 915840 | 3.8406 |
| 3.5679 | 0.03 | 992160 | 3.8411 |
| 3.5481 | 1.03 | 1068480 | 3.8400 |
| 3.5304 | 0.03 | 1144800 | 3.8423 |
| 3.5273 | 1.03 | 1221120 | 3.8431 |
| 3.5084 | 0.03 | 1297440 | 3.8436 |
| 3.4931 | 1.03 | 1373760 | 3.8460 |
| 3.4817 | 0.03 | 1450080 | 3.8460 |
| 3.4695 | 1.03 | 1526400 | 3.8482 |
| 3.4604 | 0.03 | 1602720 | 3.8497 |
| 3.451 | 0.03 | 1679040 | 3.8507 |
| 3.4443 | 1.03 | 1755360 | 3.8523 |
| 3.4359 | 0.03 | 1831680 | 3.8535 |
| 3.4238 | 1.03 | 1908000 | 3.8556 |
| 3.4097 | 0.03 | 1984320 | 3.8569 |
| 3.3949 | 1.03 | 2060640 | 3.8573 |
| 3.3833 | 0.03 | 2136960 | 3.8597 |
| 3.373 | 1.03 | 2213280 | 3.8602 |
| 3.3626 | 0.03 | 2289600 | 3.8611 |
| 3.3495 | 1.03 | 2365920 | 3.8634 |
| 3.3497 | 0.03 | 2442240 | 3.8635 |
| 3.3351 | 1.03 | 2518560 | 3.8644 |
| 3.3289 | 0.03 | 2594880 | 3.8649 |
| 3.3182 | 1.03 | 2671200 | 3.8660 |
| 3.3091 | 0.03 | 2747520 | 3.8667 |
| 3.3031 | 1.03 | 2823840 | 3.8655 |
| 3.2978 | 0.03 | 2900160 | 3.8657 |
| 3.2938 | 0.03 | 2976480 | 3.8646 |
| 3.2916 | 0.02 | 3052726 | 3.8634 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
linoyts/2000_ads_offset_noise_3 | linoyts | 2024-01-23T20:54:15Z | 50 | 1 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-01-23T20:24:07Z | ---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: '<s0><s1> ad of a llama wearing headphones'
output:
url:
"image_0.png"
- text: '<s0><s1> ad of a llama wearing headphones'
output:
url:
"image_1.png"
- text: '<s0><s1> ad of a llama wearing headphones'
output:
url:
"image_2.png"
- text: '<s0><s1> ad of a llama wearing headphones'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: an ad in the style of <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - linoyts/2000_ads_offset_noise_3
<Gallery />
## Model description
### These are linoyts/2000_ads_offset_noise_3 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`2000_ads_offset_noise_3.safetensors` here 💾](/linoyts/2000_ads_offset_noise_3/blob/main/2000_ads_offset_noise_3.safetensors)**.
  - Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:2000_ads_offset_noise_3:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`2000_ads_offset_noise_3_emb.safetensors` here 💾](/linoyts/2000_ads_offset_noise_3/blob/main/2000_ads_offset_noise_3_emb.safetensors)**.
  - Place it in your `embeddings` folder
- Use it by adding `2000_ads_offset_noise_3_emb` to your prompt. For example, `an ad in the style of 2000_ads_offset_noise_3_emb`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('linoyts/2000_ads_offset_noise_3', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='linoyts/2000_ads_offset_noise_3', filename='2000_ads_offset_noise_3_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('<s0><s1> ad of a llama wearing headphones').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/linoyts/2000_ads_offset_noise_3/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
Weni/WeniGPT-2.2.1-Zephyr-7B-1-epoch-merge-LLM_Base_2.0.3_SFT | Weni | 2024-01-23T20:53:36Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-23T19:38:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Felipo/dqn-SpaceInvadersNoFrameskip-v4 | Felipo | 2024-01-23T20:53:20Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-23T20:52:50Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 510.00 +/- 166.78
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Felipo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Felipo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Felipo
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
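Besides the RL Zoo CLI above, the checkpoint can also be loaded directly with stable-baselines3; the filename below follows the usual RL Zoo naming convention and is an assumption.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the trained agent; the filename is assumed to be "<algo>-<env>.zip" per RL Zoo convention.
checkpoint = load_from_hub(
    repo_id="Felipo/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)

model = DQN.load(checkpoint)
# To actually run the agent you also need the Atari-wrapped, 4-frame-stacked
# environment that the RL Zoo builds (see the CLI commands above).
```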
|
mayflowergmbh/Hessian-Disco-Daredevil-7B | mayflowergmbh | 2024-01-23T20:51:51Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"DiscoResearch/DiscoLM_German_7b_v1",
"shadowml/Daredevil-7B",
"base_model:DiscoResearch/DiscoLM_German_7b_v1",
"base_model:merge:DiscoResearch/DiscoLM_German_7b_v1",
"base_model:shadowml/Daredevil-7B",
"base_model:merge:shadowml/Daredevil-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-23T20:44:00Z | ---
tags:
- merge
- mergekit
- lazymergekit
- DiscoResearch/DiscoLM_German_7b_v1
- shadowml/Daredevil-7B
base_model:
- DiscoResearch/DiscoLM_German_7b_v1
- shadowml/Daredevil-7B
---
# Hessian-Disco-Daredevil-7B
Hessian-Disco-Daredevil-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [DiscoResearch/DiscoLM_German_7b_v1](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1)
* [shadowml/Daredevil-7B](https://huggingface.co/shadowml/Daredevil-7B)
## 🧩 Configuration
```yaml
models:
- model: LeoLM/leo-mistral-hessianai-7b
# No parameters necessary for base model
- model: DiscoResearch/DiscoLM_German_7b_v1
parameters:
density: 0.62
weight: 0.55
- model: shadowml/Daredevil-7B
parameters:
density: 0.56
weight: 0.55
merge_method: dare_ties
base_model: LeoLM/leo-mistral-hessianai-7b
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mayflowergmbh/Hessian-Disco-Daredevil-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
atsstagram/distilbert-base-uncased-finetuned-emotion-balanced-1000plus3000 | atsstagram | 2024-01-23T20:42:17Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-23T12:23:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion-balanced-1000plus3000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-balanced-1000plus3000
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2550
- Accuracy: 0.582
- F1: 0.5952
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.6372 | 1.0 | 63 | 1.3888 | 0.496 | 0.5009 |
| 1.3098 | 2.0 | 126 | 1.2550 | 0.582 | 0.5952 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu121
- Datasets 1.16.1
- Tokenizers 0.15.0
|
LoneStriker/code-millenials-34b-4.65bpw-h6-exl2 | LoneStriker | 2024-01-23T20:38:36Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-23T20:30:18Z | ---
license: llama2
library_name: transformers
tags:
- code
model-index:
- name: Code Millenials
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.8048
verified: false
---
# Bud Code Millenials 34B
Welcome to our Code Model repository! Our model is specifically fine-tuned for code generation tasks. Bud Millenial Code Gen open-source models are currently the State of the Art (SOTA) for code generation, beating all existing models of all sizes. We have achieved a HumanEval value of 80.48 @ Pass 1, beating proprietary models like Gemini Ultra, Claude and GPT-3.5 by a large margin, and on par with GPT-4 (HumanEval ~ 82, ref. WizardCoder). Our proprietary model (Bud Code Jr) beats GPT-4 as well, with a HumanEval value of 88.2 and a context size of 168K; we will be releasing an API for researchers, enterprises, and potential partners by the end of January 2024. If interested, please reach out to [email protected]
### News 🔥🔥🔥
- [2024/01/09] We released **Code Millenials 3B** , which achieves the **56.09 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/09] We released **Code Millenials 1B** , which achieves the **51.82 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/03] We released **Code Millenials 34B** , which achieves the **80.48 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/02] We released **Code Millenials 13B** , which achieves the **76.21 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
### HumanEval
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/BudEcosystem/code-millenials/main/assets/result.png" alt="CodeMillenials" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
For the millenial models, the eval script in the github repo is used for the above result.
Note: The humaneval values of other models are taken from the official repos of [WizardCoder](https://github.com/nlpxucan/WizardLM), [DeepseekCoder](https://github.com/deepseek-ai/deepseek-coder), [Gemini](https://deepmind.google/technologies/gemini/#capabilities) etc.
### Models
| Model | Checkpoint | HumanEval (+) | MBPP (+) |
|---------|-------------|---------------|----------|
|Code Millenials 34B | <a href="https://huggingface.co/budecosystem/code-millenials-34b" target="_blank">HF Link</a> | 80.48 (75) | 74.68 (62.9) |
|Code Millenials 13B | <a href="https://huggingface.co/budecosystem/code-millenials-13b" target="_blank">HF Link</a> | 76.21 (69.5) | 70.17 (57.6) |
|Code Millenials 3B | <a href="https://huggingface.co/budecosystem/code-millenials-3b" target="_blank">HF Link</a> | 56.09 (52.43) | 55.13 (47.11) |
|Code Millenials 1B | <a href="https://huggingface.co/budecosystem/code-millenials-1b" target="_blank">HF Link</a> | 51.82 (48.17) | 53.13 (44.61) |
### 🚀 Quick Start
Inference code using the pre-trained model from the Hugging Face model hub
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-34b")
model = AutoModelForCausalLM.from_pretrained("budecosystem/code-millenials-34b")
template = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Instruction: {instruction}
### Response:"""
instruction = "<Your code instruction here>"  # replace with your code instruction
prompt = template.format(instruction=instruction)
inputs = tokenizer(prompt, return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
## Training details
The model was trained on 16 A100 80GB GPUs for approximately 50 hours.
| Hyperparameters | Value |
| :----------------------------| :-----: |
| per_device_train_batch_size | 16 |
| gradient_accumulation_steps | 1 |
| epoch | 3 |
| steps | 2157 |
| learning_rate | 2e-5 |
| lr schedular type | cosine |
| warmup ratio | 0.1 |
| optimizer | adamw |
| fp16 | True |
| GPU | 16 A100 80GB |
### Important Note
- **Bias, Risks, and Limitations:** The model may sometimes make errors, produce misleading content, or struggle with tasks that are not related to coding.
atsstagram/distilbert-base-uncased-finetuned-emotion-4000 | atsstagram | 2024-01-23T20:36:15Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-22T22:18:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion-4000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-4000
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0339
- Accuracy: 0.6245
- F1: 0.5290
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.4379 | 1.0 | 63 | 1.1661 | 0.5765 | 0.4513 |
| 1.0777 | 2.0 | 126 | 1.0339 | 0.6245 | 0.5290 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu121
- Datasets 1.16.1
- Tokenizers 0.15.0
|
LoneStriker/code-millenials-34b-4.0bpw-h6-exl2 | LoneStriker | 2024-01-23T20:30:16Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-23T20:23:03Z | ---
license: llama2
library_name: transformers
tags:
- code
model-index:
- name: Code Millenials
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.8048
verified: false
---
# Bud Code Millenials 34B
Welcome to our Code Model repository! Our model is specifically fine-tuned for code generation tasks. Bud Millenial Code Gen open-source models are currently the State of the Art (SOTA) for code generation, beating all existing models of all sizes. We have achieved a HumanEval value of 80.48 @ Pass 1, beating proprietary models like Gemini Ultra, Claude and GPT-3.5 by a large margin, and on par with GPT-4 (HumanEval ~ 82, ref. WizardCoder). Our proprietary model (Bud Code Jr) beats GPT-4 as well, with a HumanEval value of 88.2 and a context size of 168K; we will be releasing an API for researchers, enterprises, and potential partners by the end of January 2024. If interested, please reach out to [email protected]
### News 🔥🔥🔥
- [2024/01/09] We released **Code Millenials 3B** , which achieves the **56.09 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/09] We released **Code Millenials 1B** , which achieves the **51.82 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/03] We released **Code Millenials 34B** , which achieves the **80.48 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/02] We released **Code Millenials 13B** , which achieves the **76.21 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
### HumanEval
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/BudEcosystem/code-millenials/main/assets/result.png" alt="CodeMillenials" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
For the millenial models, the eval script in the github repo is used for the above result.
Note: The humaneval values of other models are taken from the official repos of [WizardCoder](https://github.com/nlpxucan/WizardLM), [DeepseekCoder](https://github.com/deepseek-ai/deepseek-coder), [Gemini](https://deepmind.google/technologies/gemini/#capabilities) etc.
### Models
| Model | Checkpoint | HumanEval (+) | MBPP (+) |
|---------|-------------|---------------|----------|
|Code Millenials 34B | <a href="https://huggingface.co/budecosystem/code-millenials-34b" target="_blank">HF Link</a> | 80.48 (75) | 74.68 (62.9) |
|Code Millenials 13B | <a href="https://huggingface.co/budecosystem/code-millenials-13b" target="_blank">HF Link</a> | 76.21 (69.5) | 70.17 (57.6) |
|Code Millenials 3B | <a href="https://huggingface.co/budecosystem/code-millenials-3b" target="_blank">HF Link</a> | 56.09 (52.43) | 55.13 (47.11) |
|Code Millenials 1B | <a href="https://huggingface.co/budecosystem/code-millenials-1b" target="_blank">HF Link</a> | 51.82 (48.17) | 53.13 (44.61) |
### 🚀 Quick Start
Inference code using the pre-trained model from the Hugging Face model hub
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-34b")
model = AutoModelForCausalLM.from_pretrained("budecosystem/code-millenials-34b")
template = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Instruction: {instruction}
### Response:"""
instruction = "Write a Python function that checks whether a number is prime."  # replace with your own instruction
prompt = template.format(instruction=instruction)
inputs = tokenizer(prompt, return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
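The snippet above loads the weights in full precision on the default device. On GPU hardware you would typically pass a dtype and a device map at load time — a minimal variant of the load call, assuming `accelerate` is installed:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "budecosystem/code-millenials-34b",
    torch_dtype=torch.bfloat16,  # halves memory versus float32
    device_map="auto",           # shard layers across the available GPUs
)
```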
## Training details
The model was trained on 16 A100 80GB GPUs for approximately 50 hours.
| Hyperparameters | Value |
| :----------------------------| :-----: |
| per_device_train_batch_size | 16 |
| gradient_accumulation_steps | 1 |
| epoch | 3 |
| steps | 2157 |
| learning_rate | 2e-5 |
| lr scheduler type | cosine |
| warmup ratio | 0.1 |
| optimizer | adamw |
| fp16 | True |
| GPU | 16 A100 80GB |
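For reference, these settings map roughly onto the following 🤗 `TrainingArguments` — a sketch reconstructed from the table above, not the actual training script (the `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="code-millenials-34b",  # placeholder path
    per_device_train_batch_size=16,
    gradient_accumulation_steps=1,
    num_train_epochs=3,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",               # "adamw" in the table above
    fp16=True,
)
```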
### Important Note
- **Bias, Risks, and Limitations:** The model may sometimes make errors, produce misleading content, or struggle with tasks that are not related to coding. |
am-infoweb/rap_phase2_22jan_5i_v1 | am-infoweb | 2024-01-23T20:28:38Z | 90 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-01-23T16:47:59Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: rap_phase2_22jan_5i_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rap_phase2_22jan_5i_v1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0219
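The intended inputs are not documented here, but since this is an XLM-RoBERTa checkpoint with an extractive question-answering head, inference with the standard 🤗 pipeline should look roughly like the sketch below (the question and context are invented for illustration):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="am-infoweb/rap_phase2_22jan_5i_v1")

# Hypothetical example; the actual training data is not described in this card.
result = qa(
    question="Who is the invoice addressed to?",
    context="Invoice #4421 is addressed to Jane Doe, 12 Main Street, Springfield.",
)
print(result["answer"], result["score"])
```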
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.3647 | 1.0 | 5010 | 0.0825 |
| 0.0693 | 2.0 | 10020 | 0.0517 |
| 0.0228 | 3.0 | 15030 | 0.0656 |
| 0.0288 | 4.0 | 20040 | 0.0327 |
| 0.0387 | 5.0 | 25050 | 0.0448 |
| 0.0171 | 6.0 | 30060 | 0.0207 |
| 0.0136 | 7.0 | 35070 | 0.0163 |
| 0.0059 | 8.0 | 40080 | 0.0200 |
| 0.0062 | 9.0 | 45090 | 0.0243 |
| 0.0002 | 10.0 | 50100 | 0.0233 |
| 0.002 | 11.0 | 55110 | 0.0219 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
atsstagram/distilbert-base-uncased-finetuned-emotion-1000 | atsstagram | 2024-01-23T20:28:21Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-22T21:59:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion-1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-1000
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5651
- Accuracy: 0.3645
- F1: 0.2093
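The emotion labels are not listed in this card, so the sketch below (assuming the checkpoint is published under this repo id and a recent `transformers` release that supports `top_k=None`) simply prints whatever labels the model config defines:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="atsstagram/distilbert-base-uncased-finetuned-emotion-1000",
    top_k=None,  # return a score for every label instead of only the best one
)
print(classifier("I can't believe how well that went!"))
```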
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.7021 | 1.0 | 16 | 1.5978 | 0.352 | 0.1833 |
| 1.6016 | 2.0 | 32 | 1.5651 | 0.3645 | 0.2093 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu121
- Datasets 1.16.1
- Tokenizers 0.15.0
|
nextgeo/address-extraction | nextgeo | 2024-01-23T20:25:07Z | 153 | 1 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"ner",
"pytorch",
"turkish",
"tr",
"dbmdz",
"bert-base-cased",
"bert-base-turkish-cased",
"base_model:dbmdz/bert-base-turkish-cased",
"base_model:finetune:dbmdz/bert-base-turkish-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-01-22T20:44:41Z | ---
license: mit
base_model: dbmdz/bert-base-turkish-cased
pipeline_tag: token-classification
library_name: transformers
tags:
- ner
- token-classification
- pytorch
- turkish
- tr
- dbmdz
- bert
- bert-base-cased
- bert-base-turkish-cased
widget:
- text: "Bağlarbaşı Mahallesi, Zübeyde Hanım Caddesi No: 10 / 3 34710 Üsküdar/İstanbul"
---
# address-extraction

This is a simple library for extracting addresses from text. The train.py file contains the training code and is included for reference only; it is not meant to be run. The model was trained on our own dataset of addresses, which is not included in this repo. There is also predict.py, a simple script that runs the model on a single address.
The model is based on [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) from [Hugging Face](https://huggingface.co/).
## Example Results
```
(g:\projects\address-extraction\venv) G:\projects\address-extraction>python predict.py
Osmangazi Mahallesi, Hoca Ahmet Yesevi Cd. No:34, 16050 Osmangazi/Bursa
Osmangazi Mahalle 98.80%
Hoca Ahmet Yesevi Cadde 98.55%
34 Bina Numarası 99.50%
16050 Posta Kodu 98.49%
Osmangazi İlçe 98.71%
Bursa İl 99.21%
Average Score: 0.9874102413654328
Labels Found: 6
----------------------------------------------------------------------
Karşıyaka Mahallesi, Mavişehir Caddesi No: 91, Daire 4, 35540 Karşıyaka/İzmir
Karşıyaka Mahalle 98.93%
Mavişehir Cadde 96.90%
91 Bina Numarası 99.25%
4 Bina Numarası 30.75%
35540 Posta Kodu 98.97%
Karşıyaka İlçe 98.84%
İzmir İl 98.86%
Average Score: 0.9173339426517486
Labels Found: 7
----------------------------------------------------------------------
Selçuklu Mahallesi, Atatürk Bulvarı No: 55, 42050 Selçuklu/Konya
Selçuklu Mahalle 98.53%
Atatürk Cadde 47.01%
55 Bina Numarası 99.49%
42050 Posta Kodu 98.78%
Selçuklu İlçe 98.74%
Konya İl 99.16%
Average Score: 0.9240859523415565
Labels Found: 6
----------------------------------------------------------------------
Alsancak Mahallesi, 1475. Sk. No:3, 35220 Konak/İzmir
Alsancak Mahalle 99.35%
1475 Sokak 97.71%
3 Bina Numarası 99.18%
35220 Posta Kodu 99.00%
Konak İlçe 98.90%
İzmir İl 98.95%
Average Score: 0.9881603717803955
Labels Found: 6
----------------------------------------------------------------------
Kocatepe Mahallesi, Yaşam Caddesi 3. Sokak No:4, 06420 Bayrampaşa/İstanbul
Kocatepe Mahalle 99.44%
Yaşam Cadde 92.45%
3 Sokak 70.61%
4 Bina Numarası 99.18%
06420 Posta Kodu 99.00%
Bayrampaşa İlçe 98.86%
İstanbul İl 98.90%
Average Score: 0.9558616995811462
Labels Found: 7
----------------------------------------------------------------------
```
## Installation & Usage
The environment.yml file defines the conda environment used to run the model. The environment is configured for CUDA-enabled GPUs but should also work without a GPU. To run the model, use the following commands:
```bash
conda env create -f environment.yml -p ./condaenv
conda activate ./condaenv
python predict.py
```
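If you only need quick predictions rather than the bundled scripts, the checkpoint can also be loaded through a plain 🤗 pipeline — a sketch only, with `predict.py` in this repo remaining the reference implementation:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="nextgeo/address-extraction",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

address = "Osmangazi Mahallesi, Hoca Ahmet Yesevi Cd. No:34, 16050 Osmangazi/Bursa"
for entity in ner(address):
    print(entity["word"], entity["entity_group"], round(float(entity["score"]), 4))
```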
## License
This project is licensed under the terms of the MIT license. |
llmixer/BigWeave-v9-90b-4.0bpw-h8-exl2 | llmixer | 2024-01-23T20:23:34Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"4.0bpw",
"h8",
"exl2",
"conversational",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-23T19:42:03Z | ---
license: llama2
language:
- en
pipeline_tag: conversational
tags:
- 4.0bpw
- h8
- exl2
---
Exllamav2 4.0bpw h8 quant for [BigWeave-v9-90b](https://huggingface.co/llmixer/BigWeave-v9-90b).
Calibration dataset: [llmixer/20k_random_data](https://huggingface.co/datasets/llmixer/20k_random_data)
|
nbeerbower/bruphin-alpha | nbeerbower | 2024-01-23T20:15:15Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"base_model:cognitivecomputations/dolphin-2.2.1-mistral-7b",
"base_model:finetune:cognitivecomputations/dolphin-2.2.1-mistral-7b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-19T01:46:29Z | ---
license: apache-2.0
base_model:
- cognitivecomputations/dolphin-2.2.1-mistral-7b
- rwitz/go-bruins-v2
tags:
- merge
---
Simple linear merge of ehartford/dolphin-2.2.1-mistral-7b and rwitz/go-bruins-v2 using mergekit (the YAML file is included in this repo).
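The exact config ships with this repo; a linear-merge config of this general shape (the weights below are assumed for illustration, not taken from the included file) looks like:

```yaml
merge_method: linear
models:
  - model: cognitivecomputations/dolphin-2.2.1-mistral-7b
    parameters:
      weight: 0.5
  - model: rwitz/go-bruins-v2
    parameters:
      weight: 0.5
dtype: float16
```
|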
C-Stuti/temp_model_output_dir | C-Stuti | 2024-01-23T20:12:45Z | 92 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-23T20:12:29Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: temp_model_output_dir
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# temp_model_output_dir
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7204
- Precision: 0.8552
- Recall: 0.8448
- F1: 0.8399
- Accuracy: 0.8448
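The label names and target task are not documented in this card; as a minimal sketch (assuming the checkpoint is published under this repo id), the class probabilities can be inspected directly:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "C-Stuti/temp_model_output_dir"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example input text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1).squeeze()
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label.get(idx, f"LABEL_{idx}"), round(p, 4))
```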
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8.8e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.209 | 1.0 | 756 | 0.7528 | 0.8238 | 0.8130 | 0.8013 | 0.8130 |
| 0.7337 | 2.0 | 1512 | 0.7899 | 0.8209 | 0.8031 | 0.7952 | 0.8031 |
| 0.644 | 3.0 | 2268 | 0.7417 | 0.8394 | 0.8299 | 0.8238 | 0.8299 |
| 0.4777 | 4.0 | 3024 | 0.7204 | 0.8552 | 0.8448 | 0.8399 | 0.8448 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|